#juju 2011-10-10
<hazmat> jason_, it can be any string re admin-secret
<hazmat> jason_, i assume for bootstrap?  if you run juju -v bootstrap .. you should get a traceback on error, that you can paste to http://paste.ubuntu.com and paste the link here
<hazmat> niemeyer, sounds good
<jason_> hazmat, http://paste.ubuntu.com/705195/
<hazmat> jason_, it looks like your orchestra server isn't responding to api requests
<hazmat> the traceback is kind of short.. so it's hard to be sure, but either the orchestra server isn't responding or the webdav server isn't up
<jason_> I can ssh to it, and I was able to pxe boot a system from it
<jason_> now, in the sample, the address is marked off by trios of single quotes
<jason_> '''
<jason_> do those not belong -- I tried without, and got a different, longer error
<hazmat> jason_, yeah.. it looks like its a problem with the webdav server when i look at the code
<hazmat> jason_, its the first thing talked to during bootstrap to check if a juju node already exists
<hazmat> jason_, not sure what you mean by the triple quotes
<jason_> storage-url: '''http://192.168.1.103/webdav'''
<hazmat> jason_, that's not correct yaml afaics
<hazmat> jason_, single quotes around it are fine
<hazmat> else it parses to something broken for use as an address
<hazmat> yaml.load("storage-url: '''http://192.168.1.103/webdav'''") -> {'storage-url': "'http://192.168.1.103/webdav'"}
<hazmat> jason_, it doesn't even need quotes for a string in yaml for this case
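
A minimal sketch of the parsing behaviour hazmat demonstrates, runnable with PyYAML: the doubled quotes survive into the value, while a plain or singly-quoted scalar parses clean.

    import yaml

    # Triple quotes leave literal quote characters inside the parsed value:
    yaml.load("storage-url: '''http://192.168.1.103/webdav'''")
    # -> {'storage-url': "'http://192.168.1.103/webdav'"}

    # Either of these parses to a clean string:
    yaml.load("storage-url: 'http://192.168.1.103/webdav'")
    yaml.load("storage-url: http://192.168.1.103/webdav")
    # -> {'storage-url': 'http://192.168.1.103/webdav'}
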
<hazmat> jason_, which example are you referencing?
<jason_> hazmat, https://help.ubuntu.com/community/Orchestra/JuJu
<hazmat> jason_, hmm.. ic. thanks, i hadn't realized this was documented somewhere..  first i'd try removing all the triple quotes
<jason_> hazmat, the error w/o the triple quotes: http://paste.ubuntu.com/705202/
<hazmat> jason_, i'm not really all that familiar with it.. but my understanding is that first you need to define the management classes in cobbler that associate to machines you want to use, and those should match the management classes you have specified in juju's environments.yaml file.. the error is because juju queried the cobbler server and didn't find any machines matching the specified management classes.
<jason_> hazmat,  mmm, I think I might know what to try next, thanks. I've got to run for a while -- thanks
<hazmat> jason_, np.. good luck, and feel free to pop by if you have any more problems with it, there will be some folks around tomorrow who have more experience using juju + orchestra
<jason_> hazmat, cool, thanks
<jason_> hazmat, hey, got past that step, just had to netboot-enable it
<hazmat> jason_, awesome
<jason_> hazmat, heh, meanwhile, my wife is going to kill me if I don't quit messing with this! :)
<hazmat> hehe ;-)
 * hazmat knows that feeling
<hazmat> SpamapS, your updates are breaking the charms.. the revision file is missing
<SpamapS> hazmat: so I still have to manage the revision?! :-(
 * SpamapS had hoped it was optional
<hazmat> SpamapS, its in a separate file
<SpamapS> Yeah I was hoping that was an optional file
<hazmat> 'revision' in the charm
<hazmat> SpamapS,  it is during dev (it will autocreate it for you)
<SpamapS> hazmat: so I need a universal pre-commit that checks for the file
<SpamapS> hazmat: or we have to disallow general direct access to bzr.
<hazmat> SpamapS, the old way is backwards compatible
<hazmat> defined in metadata.yaml
<SpamapS> hazmat: indeed, but it generates *copious* warnings
<hazmat> SpamapS, yeah.. for the 'charmers' group that might be good
<hazmat> probably should have used a deprecation warning.. but that's still once per process.
<hazmat> and random depending on which one you hit.. i guess the repo.find hits all of them.
<SpamapS> yep
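
A sketch of the "universal pre-commit" SpamapS mentions, assuming a charm is any directory containing a metadata.yaml with the separate revision file beside it (both file names are from the discussion above; the layout assumption is mine):

    import os
    import sys

    def charms_missing_revision(repo_root="."):
        # Flag any charm directory (one holding metadata.yaml) that
        # lacks the separate 'revision' file.
        missing = []
        for dirpath, dirnames, filenames in os.walk(repo_root):
            if "metadata.yaml" in filenames and "revision" not in filenames:
                missing.append(dirpath)
        return missing

    if __name__ == "__main__":
        bad = charms_missing_revision()
        if bad:
            sys.exit("charms missing a revision file: %s" % ", ".join(bad))
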
<niemeyer> Hallo!
<rog> niemeyer: mornin'
<rog> niemeyer: i tried to push my doc changes to the URL you suggested (lp:~juju/juju/trunk) and i still get a "read-only transport" error
<niemeyer> rog: Can you please paste it?
<_mup_> Bug #871743 was filed: orchestra instance status is not visible <juju:New> < https://launchpad.net/bugs/871743 >
<rog> http://paste.ubuntu.com/705380/
<rog> niemeyer: ^
<_mup_> Bug #871745 was filed: orchestra: ks_meta not cleared <juju:New> < https://launchpad.net/bugs/871745 >
<niemeyer> rog: This is wrong in a few different ways
<niemeyer> rog: You can simply pick a branch and push onto trunk
<niemeyer> rog: trunk has most certainly evolved since you created this branch
<rog> can? or can't?
<niemeyer> rog: Sorry, can not
<rog> ok
<niemeyer> rog: The way to go is to have a local trunk
<rog> i do
<rog> ok, and merge into that
<niemeyer> rog: Then, bzr pull into it
<rog> then push it
<niemeyer> rog: First pull from the real trunk
<rog> yup
<niemeyer> rog: Then, merge onto that
<niemeyer> rog: and _test_ it!
<niemeyer> rog: Then, commit and push
<rog> even though i've only changed the docs?
<niemeyer> rog: If you're using a bound branch, you actually don't have to push
<rog> just commit?
<niemeyer> rog: But it's slightly easier to screw things up too
<niemeyer> rog: Yeah
<niemeyer> rog: For doc-only changes, build the docs again at least
<rog> how do i run the juju test suite BTW?
<niemeyer> rog: and see how it looks in the browser
<niemeyer> rog: ./test
<rog> ok
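
The landing flow niemeyer outlines, condensed into a throwaway script; the trunk path and push URL are placeholders, and ./test is the suite mentioned above:

    import os
    import subprocess

    def land(feature_branch, trunk_dir=os.path.expanduser("~/src/juju/trunk")):
        def run(*cmd):
            subprocess.check_call(list(cmd), cwd=trunk_dir)
        run("bzr", "pull")                   # first pull from the real trunk
        run("bzr", "merge", feature_branch)  # then merge onto that
        run("./test")                        # and _test_ it!
        run("bzr", "commit", "-m", "Merge %s." % feature_branch)
        run("bzr", "push", "lp:~juju/juju/trunk")
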
<rog> niemeyer: i still get the same error. i tried it from scratch: http://paste.ubuntu.com/705386/
<rog> i'm sure i'm still doing something wrong :-)
<niemeyer> rog: Hmm..
<niemeyer> rog: Are you not part of the team?
 * niemeyer checks
<niemeyer> rog: LOL, yeah
<niemeyer> rog: Ok, try now
<rog> ah, that works.
<niemeyer> rog: Sorry about that
<rog> BTW what *is* the difference between lp:juju and lp:~juju/juju/trunk
<rog> ?
<niemeyer> rog: None, assuming that the former has a default series that points to the given branch
<niemeyer> rog: Which is indeed the case
<niemeyer> rog: Which branch a project or a series points to is config-defined
<rog> niemeyer: ok. i understood something different from one of your previous remarks
<niemeyer> rog: Which was?
<rog> niemeyer: http://paste.ubuntu.com/705389/
<rog> (i'd used the URL lp:juju there)
<niemeyer> rog: You were using http
<niemeyer> rog: or that was my understanding at least
<niemeyer> rog: if it wasn't, then I read the URL incorrectly
<rog> niemeyer: i don't think i was. i just checked. i was using lp:juju
<niemeyer> rog: Ok, sorry then
<rog> i woz confuzed
<rog> np
<_mup_> Bug #871773 was filed: machine_data needs a schema <juju:New> < https://launchpad.net/bugs/871773 >
<_mup_> juju/go-store r16 committed by gustavo@niemeyer.net
<_mup_> Merging from go-new-revisions.
<_mup_> juju/go-store r17 committed by gustavo@niemeyer.net
<_mup_> Renamed NewURL to ParseURL, and added MustParseURL.
<hazmat> SpamapS, fwiw i'm marking bugs for oneiric against the distribution and milestone oneiric updates
<hazmat> niemeyer, fwereade_ we don't have a separate oneiric series atm.. so merge order is critical to ensure we don't have new features added before bugfixes for oneiric..
<niemeyer> hazmat: Agreed
<niemeyer> hazmat: What's the context?
<hazmat> niemeyer, getting 399 and local provider storage into oneiric..
<fwereade_> hazmat: I've just been quietly working away and MPing against florence bugs, I hadn't been planning to merge anything until I had some idea what was going on
<hazmat> niemeyer, but it also applies to any other bugs we get that should be fixed against oneiric, where things become a bit harder.
<hazmat> if its post oneiric release, and we're onto florence feature dev
<hazmat> unless we're saying we're only going to do one sru update, and we'll save doing a maintained stable branch till 12.04
<hazmat> we'll at least get the practice in of doing an SRU before 12.04 which is nice
<hazmat> the question is if we need or want to do more than one
<hazmat> probably better is publishing a stablish ppa
<niemeyer> hazmat: I don't know, but the focus ATM is indeed on oneiric
<hazmat> i'll ask again on list for wider feedback
<niemeyer> hazmat: I think it depends quite a bit on what the 11.10 => 12.04 period will look like
<niemeyer> hazmat: There are important things we have to decide on that will modify the way we work on that period
<niemeyer> hazmat: Regarding local-provider-storage, I wish we had gone with a webdav implementation from day zero
<niemeyer> hazmat: But if you've tested that branch and it's solving the problem for the release right now, +1
<hazmat> niemeyer, indeed, both i and jim have tested it
<hazmat> it solves the issue jamespage reported
<niemeyer> hazmat: Cool, let's go with it then and get the problem fixed
<niemeyer> hazmat: We can reevaluate the approach in the future
<hazmat> niemeyer, what does webdav bring to the table here? and which webdav impl?
<niemeyer> hazmat: It brings commonality between multiple providers, and it also brings privacy
<niemeyer> hazmat: We already have webdav support in orchestra
<fwereade_> niemeyer, hazmat: kinda-sorta: I only just MPed the version with authentication, and I think that's tied to the apache2 module (I forget what, but it skips one of the possible fields)
<hazmat> privacy between multiple users on a machine with sudo root access, against what are atm public resources, is a bit of a red herring; multiple environments on a single machine would mean a cross-provider resource (ie. host config alteration) or something costly like multiple apache2 webdavs.
<niemeyer> hazmat: We already have that code working. It just needs an existing webdav server brought up and properly configured.
<hazmat> with the new local provider storage, the fetch side is the same as what we're using now against any url.. the push side is the same as what we use in tests (disk storage), and is trivial
<niemeyer> hazmat: Yeah, the fetch side is the same.  The server side is disk store + twistd + wrapper to return URL on twistd.
<niemeyer> hazmat: I hope that in the future that becomes a single server, for local, for orchestra, and for EC2.
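
For the fetch side, the "disk store + twistd" shape niemeyer describes can be as small as this sketch (directory and port are placeholders; the push side would need PUT handling on top):

    from twisted.internet import reactor
    from twisted.web.server import Site
    from twisted.web.static import File

    # Serve the provider's storage directory over HTTP so units can fetch.
    root = File("/var/lib/juju/storage")  # placeholder path
    reactor.listenTCP(8040, Site(root))   # placeholder port
    reactor.run()
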
<niemeyer> Lunch, biab
<fwereade_> later all
<hazmat> fwereade_, cheers
 * hazmat wonders if google's dart aka java+coffeescript is really useful
 * rog is quite glad that dart isn't stepping on Go's toes
<_mup_> juju/trunk r401 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-provider-storage [r=jimbaker,niemeyer][f=869945]
<_mup_> Make local provider storage network accessible to allow for unit access, fixes problems
<_mup_> with charm upgrade.
 * hazmat lunches
<_Groo_> hi/2 all
<_Groo_> could anyone help me? i want to test juju but i dont have an AWS key...
<_Groo_> how can i bootstrap juju without such a key? can i emulate one or make juju ignore it?
<jimbaker>   _Groo_ , consider using the local provider setup
<niemeyer> rog: I'm actually surprised by _how much_ it's not stepping on it
<SpamapS> _Groo_: you can also use openstack
<rog> niemeyer: yeah, i thought there might be some influences going on there, but there don't appear to be
<rog> niemeyer: := and go-order type declarations would have been a nice touch...
<niemeyer> rog: Interfaces as well, at the very least
<rog> niemeyer: yeah.
<rog> niemeyer: from my 30s look at it the type and object model seems substantially that of java
<rog> which does seem a bit... retro
<niemeyer> rog: Yeah, it feels quite a bit like Javascript+Java
<niemeyer> rog: Which is unsurprising given the stakeholders' background
<rog> on the positive side, it means i don't have an urge to waste time looking through it in detail...
<rog> niemeyer: yeah. seems like they could've been a little bit more radical. but maybe they like that space.
<SpamapS> So, we may need to do one last upload to juju in 11.10
<SpamapS> oh wait, never mind
<SpamapS> I was just thinking it uses the PPA by default
<SpamapS> but it doesn't! w00t!
<niemeyer> SpamapS: Yeah, it's.. magic! :)
<SpamapS> niemeyer: I keep thinking that it would be better to have the bootstrap process build a repo with juju in it.. so you always get the same juju regardless of the archive/PPA state.
<rog> niemeyer: i'm off. i sent you a come back on that review by the way. all done save one query.
<niemeyer> rog: Cool
<niemeyer> rog: Thanks, and a good evening
<rog> see you tomorrow
<jimbaker> btw, juju.ubuntu.com/docs is not being updated
 * niemeyer tries to sort out ordering of updates in the store
<SpamapS> hazmat: btw, we have some more bugs to fix in txaws ..
<SpamapS> hazmat: the provisioning agent also needs to be a little more robust when handling errors from txaws ..
<hazmat> SpamapS, hmm.. this is around the machine termination work?
<SpamapS> We had a failure during the demo where sometimes expose would try to list instances, fail, and then get an error raised because it tried to iterate on None
<hazmat> ugh
<SpamapS> after that, the provisioning agent would not do *anything*
<SpamapS> kill/restart it would work
<SpamapS> luckily we had it a few times in rehearsal so I had Adam watching for it during the demo
<SpamapS> It was about 1 in 10 times.. so we just crossed our fingers and went for it. :-P
<hazmat> glad the demo went well, sounds like it rocked
<SpamapS> It went over great
<SpamapS> hadoop w/ 7 nodes in under 5 minutes. :)
<jimbaker> SpamapS, there's a related outstanding issue (bug824279)
<jimbaker> bug 824279, please ;)
<_mup_> Bug #824279: Security group functions for EC2 provider should retry <juju:New> < https://launchpad.net/bugs/824279 >
<SpamapS> jimbaker: that is definitely related, tho I think the bigger problem is that errors are allowed to disable the provisioning agent
<jimbaker> from the sound of it, openstack occasionally returns a payload txaws can't parse, and this bubbles up inappropriately
<SpamapS> jimbaker: exactly
<jimbaker> SpamapS, yeah, i never liked that architecture
<SpamapS> txaws should be smarter..
<SpamapS> but we should be more defensive
<jimbaker> SpamapS, precisely
<niemeyer> SpamapS: That's a slightly spread out issue indeed, and it's the sort of thing I hope to get cleaned up in the 11.10 => 12.04 timeline
<SpamapS> Want to make sure its well reported so it will be easier to fix.
<jimbaker> SpamapS, i think one possibility here is to consider that the provisioning agent, when it does fall over, is something that can be restarted. ideally with ha. but there has to be a last line of defense. not you watching it ;)
<SpamapS> yeah, what was weird was that it didn't exit.. it just stopped doing anything
<jimbaker> SpamapS, ahh, so not even failfast
<SpamapS> My guess was that some watch/callback/etc. needed to be re-added in a 'finally:' clause somewhere
<SpamapS> Its quite reproducible..
<niemeyer> SpamapS: I've seen that happening before.. a deferred that never fires can cause that
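
A minimal sketch of the defensive pattern under discussion, in the inlineCallbacks style juju used at the time; the names are illustrative, not juju's actual agent code:

    from twisted.internet import defer
    from twisted.python import log

    @defer.inlineCallbacks
    def process_machines(provider):
        # One periodic pass: a provider failure is logged and the loop
        # survives, instead of the agent silently wedging.
        try:
            instances = yield provider.describe_instances()
        except Exception:
            log.err()  # log the current failure, then retry next period
            return
        if instances is None:  # the demo failure mode: iterating None
            log.msg("provider returned no data; skipping this pass")
            return
        for instance in instances:
            log.msg("checking %r" % (instance,))
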
<SpamapS> just digging through bugs now to see if there's a dupe
<SpamapS> (while also ISO testing the 11.10 release :)
<niemeyer> SpamapS: My plan for 12.04 encompasses fixes for all of that, FWIW. We still have to talk about it to see if we're all buying into it, though.
<hazmat> SpamapS, is bug 863400 addressed already?
<_mup_> Bug #863400: examples repository is not installed from PPA <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/863400 >
<SpamapS> hazmat: no, its just blocked on me making the packaging from 11.10 backportable to lucid/maverick
<SpamapS> trivial, but not done
<SpamapS> BTW the reason they're not there is a missed file during the rename
<hazmat> ok.. i'm going to move it to the florence milestone.. i'm trying to close out eureka
* hazmat changed the topic of #juju to: http://j.mp/juju-florence http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog
<hazmat> florence is open
<niemeyer> WOOHAY
<jimbaker> very nice
<jimbaker> it was very clear to me in working on the provisioning agent that it's very difficult to get a complex watch setup that is correct in twisted. so moving to golang is definitely something i continue to support
<hazmat> jimbaker, umm.. add an errBack ?
<hazmat> for the error handling that is
<hazmat> SpamapS, do you have any of the logs from the provisoning agents by chance?
<SpamapS> hazmat: no. :(
<SpamapS> kept meaning to save one off
<SpamapS> but its really easy to reproduce
<SpamapS> just return None from describe_instances
<SpamapS> can probably write a test actually
<SpamapS> hazmat: rather, instead of returning None, raise an error
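
A trial test in the spirit SpamapS suggests, reusing the process_machines sketch from earlier; the stub stands in for txaws, and none of these names are juju's real ones:

    from twisted.internet import defer
    from twisted.trial import unittest

    class FlakyProvider(object):
        def describe_instances(self):
            # Reproduce the reported failure: txaws raising on a bad payload.
            return defer.fail(RuntimeError("unparseable payload"))

    class ProvisioningDefenseTest(unittest.TestCase):
        def test_provider_error_is_swallowed(self):
            # Passes only if process_machines handles the error rather
            # than letting it propagate (trial fails on an errback).
            return process_machines(FlakyProvider())
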
<jimbaker> hazmat, sure, we could do that. i suspect (but i would need a log to confirm) that the problem would not be caught by the exception handler L461 in open_close_ports_on_machine in juju.agents.provision
<jimbaker> actually that's just illustrative of where it should be caught, since in the specific case there, it's just to ignore something uninteresting
<SpamapS> wouldn't the appropriate way be to handle unexpected errors in any underlying library call?
<jimbaker> SpamapS, yes, to be more defensive, or failfast, when doing something external
<hazmat> SpamapS, religious backwards compatibility will leave important features on the table.. zk security comes to mind as a pending one
<jimbaker> that method does isolate any txaws calls related to expose, but it can be called in the context of a watch
<niemeyer> SpamapS: Yes, that's really all that there is to it. Certain practices make that intrinsic, while others make that error prone.
<SpamapS> hazmat: agreed 100%, I think it will slow down dev a lot.
<SpamapS> hazmat: I'm not sure how we can manage to keep putting new versions of juju in -updates without it though.. otherwise we're going to get angry people who have dead environments.
<hazmat> SpamapS, i'm not sure why we should try past an initial SRU (which is good experience for 12.04)
<hazmat> have a stable ppa for people who want new features
<hazmat> if we want to have additional fixes for 11.10, we should have a release branch
<SpamapS> Thats typically how its done, yes.
<SpamapS> However I think we're being asked to think outside the box here.
<niemeyer> SpamapS, hazmat: We may well live the entire 12.04 timeframe with a compatible code base
<niemeyer> SpamapS, hazmat: It really depends on how we're running next
<SpamapS> There are two very distinct levels of compatibility to think about too
<SpamapS> There's the running environment, and the charms.
<SpamapS> And a third, which is the CLI
<niemeyer> SpamapS, hazmat: People must be able to trust it by 12.04, so the focus is on making it solid
<SpamapS> Yeah, nobody's arguing that we shouldn't do some radical things to make juju a production ready product for at least a few use cases by 12.04. The current question is what to do about SRU
<niemeyer> SpamapS: I don't think we need any incompatible changes to make it production-ready.
<niemeyer> SpamapS: SRUs can flow from trunk before we break compatibility.
<SpamapS> niemeyer: adding ZK security would probably be difficult to do w/o breaking a running env
<SpamapS> unless we are careful in making things gracefully degrade on the ZK schema version
<niemeyer> SpamapS: That's not critical for putting it in production, to be honest, but it'd be possible still
<SpamapS> niemeyer: alright, I'll have to take your word on both of those, as I'm not entirely familiar with the current security model
<SpamapS> The previous understanding I had was that any agent can change and view any part of ZK
<SpamapS> that's not even close to production acceptable.
<niemeyer> SpamapS: The main point I'm raising here is that I don't even think we should be addressing zk security right now
<SpamapS> yeah I believe you that there's no need for it.. I just don't understand it well enough to speak to it.
<niemeyer> SpamapS: "production" means different things to different people.
<hazmat> right now the story is there's no security; there is an implementation on deck that can be finished up in an additional week or two
<hazmat> but it's moot if we go Go
<niemeyer> SpamapS: I'd certainly deploy juju with agents having access to zk
<niemeyer> SpamapS: In production
<niemeyer> SpamapS: I'd not deploy it without being able to reboot
<niemeyer> SpamapS: etc
<SpamapS> I'd put rebooting above zk security too.
<hazmat> niemeyer, iotw. there are other issues with higher priority
<niemeyer> hazmat: You mean in the same words, yeah :-)
<SpamapS> I'd just be hesitant to deploy something that makes it so one root compromise enables a global root compromise.
<SpamapS> it can be "production part deux" ;)
<niemeyer> SpamapS: Precisely
<niemeyer> SpamapS: It's not that we disagree, it's just that there's critical critical, and critical :-)
<hazmat> i'd still like to finish the merge on the security stuff, just so it's not pending.. the further away it gets, the costlier the context switch
<niemeyer> hazmat: I'm not sure it's a good idea, but we should definitely talk about it
<hazmat> i should probably just update it to current trunk and leave it pending
<hazmat> niemeyer, i'm fine with holding off on it for now, but it is still something i'd like to see for 12.04
<SpamapS> wouldn't it be possible to simply put a version constraint on it.. if schema_version >= 3: ... else: warn("NO SECURITY!")
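
Fleshing out that one-liner as a sketch; the constant and function names are made up, the point is just gating ACL enforcement on the stored schema version:

    import warnings

    MIN_SECURE_SCHEMA = 3  # illustrative value, not a real juju constant

    def maybe_enable_acls(schema_version):
        if schema_version >= MIN_SECURE_SCHEMA:
            return True
        warnings.warn("zookeeper schema %d predates ACLs: NO SECURITY!"
                      % schema_version)
        return False
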
<hazmat> but agreed reboots and other prod issues are more important for now
<niemeyer> hazmat: It's something I want to see in too, for sure, but we have to put the work in context of the overall strategy
<hazmat> niemeyer, we haven't done much overall strategy discussion for what we want to accomplish this cycle
<niemeyer> hazmat: ROTFL
<SpamapS> niemeyer: I went ahead and made 'production' an official tag for juju's bugs
<niemeyer> hazmat: We haven't indeed.. but please excuse me while I go back to hacking the store so I can try to get this on time. :-D
<niemeyer> hazmat: We've been consciously delaying strategy conversations since the strategy is very well known up to now
<niemeyer> hazmat: Getting 11.10 in shape
<hazmat> niemeyer, good stuff we can revisit post store/repo launch
 * hazmat returns to hacking
<niemeyer> hazmat: Once we're good on that front (I'm still not), and we breathe for a couple of days maybe, we should start more serious conversations on the months ahead
<hazmat> SpamapS, btw.. pls put in a request to merge status2gource ;-) that's awesome
<hazmat> SpamapS, is there a screencast of that rocking out?
<SpamapS> hazmat: I'm hoping to just make it part of juju status.  --gource
<hazmat> SpamapS, nice
<SpamapS> its pretty simple
<SpamapS> hazmat: we had a few
<SpamapS> hazmat: hopefully we'll have full video of the actual demo
<hazmat> that would be awesome... although potentially a long wait
<SpamapS> IIRC, the OpenStack conf guys are editing
<hazmat> SpamapS, sure.. but i remember waiting a year for the surge guys to finish up last year's videos..
<SpamapS> doh
<jamespage> evening all
<jamespage> I have a query re public-address/private-address
<niemeyer> JAMES PAGE!
 * jamespage waves at niemeyer
<jamespage> I've been doing a bit of work on the cassandra charm and I've switched over to using private-address/public-address
<jamespage> but I have to configure cassandra to listen on a specific IP address for cluster communication
<niemeyer> jamespage: Hmm, ok
<jamespage> I went with `unit-get private-address` - but running in ec2 this gives me 'domU-12-31-39-0B-14-11.compute-1.internal'
<SpamapS> jamespage: I actually already committed changes to use public-address/private-address
<jamespage> which binds onto 127.0.1.1
<SpamapS> heh thats interesting
<hazmat> ugh..
<jamespage> yeah - thats what I thought :-)
<SpamapS> seems like we should be putting private *address* in that field, not private-hostname
<niemeyer> jamespage: Looks buggy..
<SpamapS> since we have the collaboration of the provider, addresses should be useful
<niemeyer> SpamapS: This should actually be the public one for EC2, right now
<jamespage> SpamapS: I've done some work on the seed management as it was restarting cassandra *a lot* when it did not need to
<SpamapS> jamespage: cool!
<SpamapS> jamespage: you can just --overwrite lp:charm/cassandra .. I was just seeing how unit-info and private/public addresses work
<SpamapS> niemeyer: huh?
<jamespage> SpamapS: wilco
<niemeyer> SpamapS: It should be the address that allows units to intercommunicate.. I guess we can use the internal 10.*.*.*
<SpamapS> jamespage: as far as binding a specific IP, is that necessary? can't you use 0.0.0.0 ? Or is it the snitch thing that needs to know its IP?
<SpamapS> niemeyer: we *must* use the internal address, or people get massive bandwidth bills
<jamespage> SpamapS: you can for the thrift interface - but not for the peering
<niemeyer> SpamapS: yeah, cool
<SpamapS> jamespage: right that makes sense.
<hazmat> jamespage, that's interesting..
<hazmat> host domU-12-31-39-0B-E0-59.compute-1.internal
<hazmat> domU-12-31-39-0B-E0-59.compute-1.internal has address 10.214.231.167
<SpamapS> jamespage: I suppose the simple way is to look it up with dig/host, instead of using gethostbyname
<hazmat> ping domU-12-31-39-0B-E0-59.compute-1.internal -> 127.0.1.1
<jamespage> hazmat: bingo!
<SpamapS> hazmat: right, thats because we always put a machine's hostname in /etc/hosts as 127.0.1.1
<niemeyer> hazmat: Cool.. we just need to resolve it differently
<niemeyer> hazmat: Or rather.. are we resolving it? /me looks at unit-get private-address
<hazmat> jamespage, if we drop the search domain from /etc/hosts it should also work
<hazmat> niemeyer, it queries the private address from the metadata server
<niemeyer> hazmat: But what's the output from the command?
<niemeyer> hazmat: an ip, or the domain name?
<hazmat> niemeyer, we can get either, we use address for ec2
<niemeyer> hazmat: Is the address an ip, or a domain name? :-)
<hazmat> er. domain
<hazmat> niemeyer, its domains for every provider except local which uses ip
<jamespage> so what should public-address get me in ec2?
<niemeyer> hazmat: That sounds good then
<SpamapS> jamespage: public would be less relevant for cassandra
<niemeyer> jamespage: The public domain name for the local machine
<hazmat> jamespage, the public dns name for the machine
<jamespage> so that should be a name not an address - OK
<SpamapS> I think whats important is that the behavior of private-address is well understood. I have to admit, I'd expect it to *always* give me a network address, not a hostname, but if it might do that sometimes, then charms can deal with that.
<SpamapS> jamespage: I think you have to plan for both
<niemeyer> jamespage: address is a loose term
<SpamapS> if unit-info private-address | grep -q "[a-zA-Z]" ; then resolve_hostname ; fi
<SpamapS> hrm that breaks on IPv6 doesn't it?
<niemeyer> jamespage: ipv4 address, ipv6 address, mac address, etc, are not loose terms
<hazmat> SpamapS, that's one reason it's hostname for address, to preserve ipv6 compatibility
<hazmat> ie. leave it to dns to resolve
<hazmat> except here dns is effectively broken
<SpamapS> yeah, my concern would be that sometimes it might be an IP
<niemeyer> SpamapS: We can certainly polish/improve that over time
<hazmat> SpamapS, in the local provider it is indeed an ip, since the hostname isn't routable from the host to the container
<niemeyer> SpamapS: Let's keep an eye on that and learn how people use it
<jamespage> niemeyer, that's kinda what caught me out; I've been testing in the local provider just fine
<jamespage> (appreciate its a simpler environment)
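
A sketch of the defensive handling discussed above: accept either form from unit-get, and when the value is a hostname resolve it with dig (SpamapS's dig/host suggestion) rather than gethostbyname, which would hit the 127.0.1.1 alias in /etc/hosts. The helper name is made up:

    import socket
    import subprocess

    def normalize_private_address(value):
        # Already an IPv4/IPv6 address? Use it as-is.
        for family in (socket.AF_INET, socket.AF_INET6):
            try:
                socket.inet_pton(family, value)
                return value
            except socket.error:
                pass
        # Hostname: ask DNS directly instead of the local resolver stack.
        out = subprocess.check_output(["dig", "+short", value]).strip()
        return out.splitlines()[0] if out else value
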
<hazmat> so the underlying problem comes from cloud-init
<hazmat> # Added by cloud-init
<hazmat> 127.0.1.1       domU-12-31-39-0B-E0-59.compute-1.internal domU-12-31-39-0B-E0-59
<SpamapS> Its not really a "problem" per se
<jamespage> switched to ec2, no peering with cassandra
<hazmat> if it didn't have the *.internal address it would just work
<SpamapS> its meant to make sure the machine can resolve its hostname
<hazmat> that is in /etc/hosts
<hazmat> SpamapS, we can do that from the second part of that line
<hazmat> hostname
<hazmat> domU-12-31-39-0B-E0-59
<hazmat> we don't need the domU-12-31-39-0B-E0-59.internal entry
<SpamapS> hazmat: right, but the FQDN needs to resolve to the local machine
<SpamapS> its actually something to do with MTA's IIRC
<hazmat> that's what causes the problem.. it doesn't need to link back to 127.0.1.1 just to the ip address?
<SpamapS> smoser knows better than I do
<hazmat> if i remove that entry from the line it all works as expected
<niemeyer> jamespage,hazmat: I think the way the cloud-init stuff is injecting the ip is a bit dubious as well,
<SpamapS> That entry is not going away
<SpamapS> I mean, raise a bug
<niemeyer> :-)
<SpamapS> IIRC its there for a good reason
<hazmat> SpamapS, but the entry would work with just -> 127.0.1.1  domU-12-31-39-0B-E0-59
<hazmat> afaics
<SpamapS> Would work for what?
<SpamapS> You're assuming you know every way that the FQDN is used
<hazmat> i dunno... probably juju ;-)
<niemeyer> SpamapS: FWIW, I don't think it's a common convention to use a loopback for the FQDN
<hazmat> i don't think fqdn -> localhost is something that should be assumed
<hazmat> its already resolvable
<SpamapS> you could turn it off
<SpamapS> We might even be setting it
<hazmat> cloud-init is setting it..
 * hazmat checks cloud-init config
<SpamapS> manage_etc_hosts must be set to skip that bit
<SpamapS> smoser: around?
<jamespage> SpamapS: you might be lucky - think he's on hols today
<SpamapS>   sanitize hosts file for system's hostname to 127.0.1.1 (LP: #802637)
<_mup_> Bug #802637: cloud-init needs to check for hostname to resolve to 127.0.1.1 <cloud-init:Fix Released by lynxman> < https://launchpad.net/bugs/802637 >
<SpamapS> doh
<SpamapS> no explanation as to why that is asserted
<hazmat> i don't see any reasoning in there on why fqdn should be aliased to localhost.. that looks like it introduced a bug
<SpamapS> but my guess based on the time frame is this was centered around orchestra starting to use cloud-init
<hazmat> m_3, ping
<hazmat> whoops
<hazmat> lynxman, ping
<hazmat> looks like scott committed it in rev 409
<lynxman> hazmat: pong
<hazmat> lynxman, do you know why fqdn needs to alias to localhost in cloud init.. ie the reasoning behind the fix for bug 802637
<_mup_> Bug #802637: cloud-init needs to check for hostname to resolve to 127.0.1.1 <cloud-init:Fix Released by lynxman> < https://launchpad.net/bugs/802637 >
<lynxman> hazmat: SpamapS: we added this to circumvent some instances not having an entry to 127.0.1.1 and follow debian guidelines
<lynxman> hazmat: since we found cases of daemons (MTA, rabbitmq, etc) that would refuse to start or delay start due to lack of hostname resolving to 127.0.1.1
<hazmat> lynxman, wouldn't hostname -> 127.0.1.1 be fine
<SpamapS> resolving to 127.0.1.1, or resolving at all?
<hazmat> lynxman, is fqdn also needed?
<lynxman> SpamapS: it was an added requirement, I did hostname then we added FQDN as well, the debian guideline implies both
<SpamapS> I do recall that rabbit used to be really picky about its fqdn
<lynxman> SpamapS: just hostname did the job tbh but we want to be legal
 * hazmat remembers that as well
<lynxman> SpamapS: btw thanks for packaging juju, I've started the macports work to release it asap
<SpamapS> lynxman: sweet, I have a mac here running Lion so let me know when you're done and I can test it.
<lynxman> SpamapS: will do, I need to create the packages for the new dependencies (pydot and py-apt)
<lynxman> btw is pyapt strictly needed on non ubuntu environments?
<SpamapS> lynxman: this 127.0.1.1 thing is rather confusing.. I see this reference to it in debian's docs.. but can you point me to the guidelines you guys were following? http://www.debian.org/doc/manuals/debian-reference/ch05.en.html
<hazmat> lynxman, i don't think that fqdn is appropriate in cases where its already resolvable
<hazmat> to be aliased to localhost
<lynxman> SpamapS: exactly that, point 5.1.2
<SpamapS> lynxman: You may have to patch that stuff out, I don't think anybody had time to think about the ramifications of querying apt data on Mac OS X ;)
<lynxman> SpamapS: lol, will do
<lynxman> SpamapS: as said, several people have touched it, I did the initial implementation then adam_g and smoser
<lynxman> SpamapS: so it would do more stuff like get the fqdn from the metadata and such
<lynxman> SpamapS: I just hardcoded domain_name to localdomain to avoid conflicts in the long term, but it wasn't usable in some scenarios
<hazmat> lynxman, it's not.. it's only used for the local provider
<SpamapS> Right, so..
<SpamapS> that paragraph is really unclear to me.
<hazmat> which won't work on osx anyways
<SpamapS> hazmat: isn't it also used for juju-origin ?
<lynxman> SpamapS: to me as well, we implemented this in Dublin (city) so we went to the #debian channel to ask :)
<hazmat> SpamapS, hm.. its not.
<hazmat> SpamapS, it uses the cli for that one
<lynxman> SpamapS: this stuff generates quite a debate so hey, any opinion is welcome :)
<hazmat> SpamapS, failing to find the cli it defaults to ppa
<hazmat> actually to distro
<SpamapS> thats good
<hazmat> SpamapS, that's only the case if juju-origin is not set
<hazmat> else we just use the juju-origin value
<SpamapS> Well I'd say that there is a requirement for machines to be able to resolve their FQDN
<lynxman> SpamapS: and I'd agree
<SpamapS> I'd also say that cloud-init is in charge of that on some level because it is stepping in for netcfg
<lynxman> SpamapS: for me the important part was getting a resolvable hostname, which has bigger ramifications
<hazmat> and in the case of ec2 that's already true without the alias
<SpamapS> However, I do feel that the order of ops there is backwards when deciding whether or not to write it to /etc/hosts
<SpamapS> If the FQDN is resolvable in DNS, it must be kept *out* of /etc/hosts
<SpamapS> I think the debian reference is right..
<SpamapS> but since the resolution goes 1 -> 3, the fulfillment should go 3 -> 1
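
The order of operations SpamapS is arguing for, sketched: decide from DNS first, and only fall back to writing an /etc/hosts alias when the FQDN is otherwise unresolvable.

    import socket

    def fqdn_needs_hosts_entry(fqdn):
        try:
            socket.getaddrinfo(fqdn, None)
            return False  # resolvable already: keep it *out* of /etc/hosts
        except socket.gaierror:
            return True   # unresolvable: a 127.0.1.1 alias is the fallback
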
<SpamapS> I also think this may cause some issues for people using 11.10 in the cloud
<lynxman> SpamapS: so I'd ping smoser and ask his input as well, I'll be in Millbank tomorrow in case you need to get ahold of the release team :)
<SpamapS> Its probably just a high prio SRU at worst
<SpamapS> not a release blocker
 * SpamapS moves discussion to #ubuntu-cloud tho
<lynxman> SpamapS: just sayin'
<lynxman> anyhow I need to run
<lynxman> catch you guys later o/
<SpamapS> later
<jason_> Hey guys, I'm trying to troubleshoot a juju/orchestra setup, which I started with the instructions here: https://help.ubuntu.com/community/Orchestra/JuJu
<jason_> I ran the bootstrap command ok, but I'm stopped at deploying the mysql example: http://paste.ubuntu.com/705567/
<hazmat> jason_, that's removing the triple quotes in the addresses given in the example?
 * hazmat wanders off to remove the triple quotes from the wiki page
 * SpamapS filed bug 871966 to document the cloud-init discussion
<_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems <cloud-init (Ubuntu):New> < https://launchpad.net/bugs/871966 >
<jason_> hazmat, yeah, removing the triple quotes, then enabling netboot in the profile I had in orchestra got me here
<jason_> got me past the bootstrap
<jason_> I'm not 100% sure what the mechanism is supposed to be here -- I have an orchestra server and one vm installed via orchestra
<hazmat> jason_, can you wget on <your-web-dav-url>/provider-state and paste it
<hazmat> jason_, the triple quotes are gone from both the webdav url and the orchestra server i assume... but the error is the same as it was with them
<hazmat> juju will store the address of the zookeeper server into that web-dav-url
<jason_> http://paste.ubuntu.com/705571/
<jason_> oh wait
<jason_> provider state
<hazmat> which it tries to contact on deploy
<jason_> hazmat, hmm /provider_state or /webdav/provider_state are 404s
<jason_> oh
<jason_> hazmat, it's zookeeper-instances: [MTMxODEwNzQ0NS43NTE2NjE2MS4wNzExNQ]
<hazmat> so that looks good
<jason_> is orchestra supposed to be enlisting the system I created through it to be the juju server?
<hazmat> jason_, no.. juju is going to create its own server through orchestra
<hazmat> but yes the system does need to be registered to the management class in orchestra and set up for netboot
<jason_> Ok, that might be an issue -- orchestra is running on virtualbox... not sure how it'd create a new system
<jason_> on its own
<jason_> the system I installed from it, I did creating a vm and pxe booting
<hazmat> SpamapS, fwereade_ if you have a moment, i'm a bit out of my element on debugging orchestra setups
<SpamapS> ack
<hazmat> jason_, yeah.. i'm not sure that setup is going to work
<hazmat> orchestra + juju via a vm
<hazmat> maybe if the machines are off and setup for netboot
<SpamapS> jason_: so you have a system listed in orchestra, that you know pxe boots from the pxe/tftp/etc that orchestra is providing?
<SpamapS> it could work w/ a bridge net
<SpamapS> jason_: can you do a 'cobbler listvars --name=the-system-name' and pastebin it? (note that there may be sensitive stuff in there)
<jason_> SpamapS, yes, it does pxe boot from cobbler, I'll paste that
<jason_> SpamapS, No such command: listvars
 * SpamapS will RTFM instead of reading his own memory
<SpamapS> jason_: while I figure that out.. is it in the "available" management class?
<jason_> SpamapS, under management classes in the left hand menu?
<SpamapS> jason_: In the system definition itself, the last sub-menu is 'Management'
<jason_> SpamapS, I have orchestra-juju aquired and available in the selected box
<jason_> I think one of those had been in the available box, and I moved it while trying things out
<jason_> And currently, that system is installing anew -- I kicked that off a little bit ago
<SpamapS> jason_: so the way juju's orchestra provider works, it will only grab systems that are *netboot enabled* and in the 'available-mgmt-class' from environments.yaml
<SpamapS> ahh this is it
<SpamapS> jason_: cobbler system dumpvars --name=name-of-system
<jason_> SpamapS, I get an error there: TypeError: cannot marshal None unless allow_none is enabled
<SpamapS> jason_: weird, did you run it with sudo?
<SpamapS> it may be necessary actually
<jason_> SpamapS, yes, it didn't let me otherwise
<SpamapS> ok how about sudo cobbler system list
<jason_> SpamapS, oneiric01.ubuntu.lan
<jason_> that's my guy
<SpamapS> jason_: ok, so you did 'cobbler system dumpvars --name="oneiric01.ubuntu.lan"' and got the none error?
<jason_> Ok, I had bad syntax
<SpamapS> jason_: btw, if you don't have it already, the package 'pastebinit' is useful in these instances. :)
<SpamapS> you can just | pastebinit
<jason_> ah, installing now
<niemeyer> pastebinit++
<jason_> SpamapS, http://paste.ubuntu.com/705582/
<SpamapS> jason_: mgmt_classes : ['orchestra-juju-available', 'orchestra-juju-acquired']
<SpamapS> jason_: it should only be in 'orchestra-juju-available'
<SpamapS> jason_: take the other one out
<jason_> SpamapS, ok I'll move that back
<SpamapS> jason_: Other than that, it should work as expected
<SpamapS> jason_: since its already installing, its *possible* that it will come up fine.
<SpamapS> jason_: actually it most definitely should come up fine
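
The selection rule SpamapS describes, sketched as a filter; the field names follow what cobbler dumpvars prints, and the class name comes from environments.yaml:

    def available_systems(systems, mgmt_class="orchestra-juju-available"):
        # juju's orchestra provider only grabs systems that are netboot
        # enabled AND carry the 'available' management class; a system
        # still holding the 'acquired' class is skipped.
        return [s for s in systems
                if s.get("netboot_enabled")
                and mgmt_class in s.get("mgmt_classes", [])]
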
<jason_> SpamapS, ok, I've got that class straight in here. Install is wrapping up -- another thing, my system currently is on a network where it can only talk to the orchestra server
<jason_> I can add another nic -- or would it be better to switch that nic to the "public" network w/in my network
<SpamapS> jason_: when the box tries to install juju, things may go wrong then. Depends on if your orchestra server can get to the internet.
<SpamapS> jason_: the default profile uses the orchestra server's squid proxy to get to the net
<jason_> SpamapS, the orchestra server can for sure
<jason_> ah
<SpamapS> jason_: so things *should* still work
<jason_> SpamapS, ok, it's up -- I'm going to try the deploy
<SpamapS> I've done fully disconnected installs with a local mirror of Ubuntu, so I know it works.
<SpamapS> jason_: the problem is, deploy now wants *another* system
<jason_> SpamapS, ah, ok, so if I mint another, then that ought to work
<SpamapS> jason_: right. Currently there's a 1:1 relationship between deployed units of a service and machines.
<SpamapS> jason_: so bootstrap runs on the first machine, and then each deploy/add-unit after that allocates another machine
<jason_> SpamapS, ah, so wordpress takes 3 servers
<SpamapS> jason_: at the moment, yes.
<jason_> got it
<SpamapS> jason_: bug 806241 will hopefully be done soon. :)
<_mup_> Bug #806241: It should be possible to deploy multiple units  to a machine (service colocation) <production> <juju:Confirmed> < https://launchpad.net/bugs/806241 >
<_mup_> juju/config-get r392 committed by kapil.thangavelu@canonical.com
<_mup_> config get subcommand to retrieve current settings or service schema
<SpamapS> jason_: https://bugs.launchpad.net/juju/+bugs?field.tag=production there's a list of issues deemed necessary for making juju useful in production
<jason_> SpamapS, cool, thanks. I'm spinning up a couple new vms. It looks like my first vm is complaining about some things -- tough to see the errors as they come up though -- "bazaar has encountered an internal error" is part of it
<SpamapS> jason_: did you use the environments.yaml from the wiki directly?
<SpamapS> jason_: juju-origin: lp:juju/pkgs isn't going to work
<SpamapS> jason_: suggest removing that line.
<SpamapS> jason_: also are you using juju from the PPA, or 11.10 ?
 * SpamapS realizes its 1:30pm and goes to eat
<SpamapS> jason_: bbiab
<jason_> SpamapS, ok -- juju from bazaar
<jason_> SpamapS, I'll change that enviro bit
<niemeyer> Good progress today, and in a good break point.. I'll step out and do something outside.. back later.
<SpamapS> jason_: So you are running juju by checking out lp:juju ?
<xerxas> Hi all
<xerxas> I have some wonderings ...
<xerxas> I'm hesitating between juju, cloudformation, and { chef, puppet }
<xerxas> these tools are somewhat complementary
<xerxas> but part of the scope is the same
<xerxas> is there someone using juju with cloudformation ?
<SpamapS> xerxas: If I needed to put up critical production systems tomorrow, I'd go with chef or puppet... knowing that I can convert all of my chef/puppet to charms once juju is "production ready"
<SpamapS> xerxas: I don't think Juju and cloudformation would be compatible together.
<xerxas> SpamapS: ok, so for iaas use cloudformation, and for application management and its configuration files, puppet or chef?
<xerxas> SpamapS: ahh , interesting "Juju and cloudformation would be compatible together" , how come ?
<xerxas> juju creates some instances , this is why it's not compatible ?
<xerxas> so cloudformation is kind of "very static" ?
<SpamapS> xerxas: they both use cloud-init to seed themselves into the instance
<xerxas> SpamapS: I'm not forced to use cloud-init with cloudformation, am I?
<SpamapS> xerxas: I personally wouldn't use cloudformation since its likely to never be available on any other IaaS provider
<xerxas> SpamapS:  right, this is why I'm searching something else
<SpamapS> xerxas: cloudformation uses cloud-init to make the instance do what it wants. So does Juju.
<xerxas> but puppet and chef don't bootstrap infrastructures or create resources (chef knows how to create ec2 instances with knife, but no more, and knife is only client side)
<xerxas> SpamapS: what about elasticIPs, autoscale, securitygroups ?
<SpamapS> xerxas: these bugs are all known problems that we think would be issues for using juju in production: https://bugs.launchpad.net/juju/+bugs?field.tag=production
<xerxas> I mean , with cloud formation , I can go up to "create a whole infrastructure for my application in my continuous integration"
<SpamapS> xerxas: if you are willing to a) work around them, or b) help fix them, then juju would be a good choice for you today. :)
<xerxas> ;)
<SpamapS> xerxas: right, thats exactly what we want to do with juju.. and you can do it right now.. but you will be accepting some risk
<xerxas> ok, I'm pretty much ok to accepting risk ;)
<xerxas> I want to be on the bleeding edge
<SpamapS> Ruby dev? ;-)
<xerxas> but have no much time to contribute to juju
<SpamapS> xerxas: thats ok, these will absolutely be solved by the 12.04 release of Ubuntu
<xerxas> no , system administrator ;) (using python as much ruby ;) )
<SpamapS> xerxas: have you played with juju yet?
<xerxas> yes
<SpamapS> xerxas: how far did you get?
<xerxas> 2 years administrating 60 servers with puppet on ec2 (from 0 to 60 servers, bootstrapped all the infrastructure), then used chef for 1 year, then now, testing juju and testing cloudformation
<xerxas> SpamapS:  I could deploy charms ;)
<jason_> SpamapS, I'm running juju  w/ lp:juju
<SpamapS> jason_: any reason you're not using the PPA or the one from 11.10 ?
<jason_> SpamapS, I was using the one from 11.10, I actually had an issue with that where the version I had appeared to mismatch with what the examples needed
<SpamapS> xerxas: it would be *really* helpful to have some bleeding edge ops feedback with juju, so if you're willing to be patient with us, WELCOME! :)
<jason_> SpamapS, then with this howto, it suggested running from lp, so I did that
<SpamapS> jason_: which examples were you reading from? r398 was just uploaded to 11.10 last night, and brings it up to date with most of the upstream docs.
<jason_> SpamapS, from the /usr/share/docs
<jason_> SpamapS, this was fri
<SpamapS> jason_: yeah, the one in 11.10 now is going to be less likely to change out from under your feet. :)
<jason_> SpamapS, or maybe thurs
<jason_> SpamapS, I'll install that now
<SpamapS> jason_: that should also eliminate your bzr branching problem since it will automatically choose 'distro' as your source, and that will let you use the squid proxy in your orchestra server
<jason_> SpamapS, how do I see what tasks the cobbler server is sending out, and clear those -- all three of my systems came up and tried those broken bzr instructions
<jason_> cool
<SpamapS> jason_: juju destroy-environment first
<SpamapS> jason_: that will clear everything out of the webdav server and should reset all the cobbler system records
<jason_> SpamapS, sweet
<xerxas> SpamapS: I would like to help and give feedback , I'm just evaluating the solution I'll use ...
<xerxas> so far, juju seems an intermediate between cloudformation and puppet
<xerxas> juju plays on the infrastructure side and application side , this seems interesting to me
<jason_> SpamapS, so I bootstrapped again, and it's a matter of waiting for my systems to poll for instructions?
<jason_> or do they need to restart
<SpamapS> jason_: since you don't have power control defined, you have to manually reboot them
<jason_> cool
<SpamapS> jason_: if you had a PDU of some kind that cobbler can talk to, it would have powered them off/on
<xerxas> SpamapS: anyway, thanks for your answer
<xerxas> +s
<xerxas> still wondering what to use, and how... ;)
<SpamapS> xerxas: we're here if you have questions. :)
<SpamapS> hmm.. seems in the run up to 11.10 we have introduced some python 2.7-isms
<SpamapS> failing tests on lucid. :-P
<hazmat> SpamapS, log?
<SpamapS> still running
<SpamapS> but at least 5 thus far
<SpamapS> A lot of them seem centered around checking for "too many args"
<hazmat> SpamapS, got it
 * SpamapS vows to get jenkins setup with some LXC slaves soon.
<hazmat> SpamapS, are you referencing .. https://code.launchpad.net/~clint-fewbar/+recipe/juju-daily-test
<hazmat> i see all kinds of odd things there
<hazmat> could not init jvm.. etc
<SpamapS> hazmat: that one doesn't even build yet on lucid because dh_python2 is missing
<SpamapS> which is exactly what I'm working on right now
<hazmat> ah
<SpamapS> FAILED (skips=7, failures=5, errors=3, successes=1549)
<SpamapS> will pastebin the log..
<SpamapS> http://paste.ubuntu.com/705630/
<SpamapS> hazmat: is this problems with argparse?
<SpamapS> twisted.trial.unittest.FailTest: 'juju: error: unrecognized arguments: fum' not in 'usage: juju unexpose [-h] [--environment ENVIRONMENT] service_name\njuju unexpose: error: unrecognized arguments: fum\n'
<hazmat> SpamapS, odd those tests have been going for a while
<hazmat> SpamapS, the test is being strict about checking error output
<hazmat> it looks like a variance to the output
<hazmat> s/juju: error/juju unexpose
<hazmat> pretty minor
<SpamapS> yeah its all minor stuff
<hazmat> we could just be less exact about it and compare from 'error:'
<SpamapS> so yeah just drop the preceding command..
<SpamapS> looks like an argparse difference that doesn't matter
<jimbaker> SpamapS, that's correct, we had to update some of the command tests when we moved to 2.7 because of this
<SpamapS> jimbaker: so was there a definite decision to drop 2.6 support?
<jimbaker> SpamapS, i do not believe so. in particular, we would expect these commands to be executed on client running 2.6 like os x
<jimbaker> however, it has been the case for these tests since probably before budapest. i suppose we could use the python version to determine the error text, or relax it
<SpamapS> Lion has 2.7
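
One way to relax the comparison, as discussed: assert only from 'error:' onwards, so the argparse prefix difference between python 2.6 and 2.7 stops mattering. A sketch, not the actual test helper:

    def assert_usage_error(test, output, expected):
        # 2.6 prefixes errors with the program name, 2.7 with the
        # subcommand usage line; compare only the part that matters.
        idx = output.find("error:")
        test.assertNotEqual(idx, -1, "no error in output: %r" % output)
        test.assertEqual(output[idx:].strip(), expected)

    # e.g. assert_usage_error(self, stderr,
    #                         "error: unrecognized arguments: fum")
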
<jason_> SpamapS, I'm still getting bzr errors when my systems come up -- also, am I right that these need to keep reinstalling in order to get new commands from orchestra?
<SpamapS> jason_: no, the install is just the way that you get a known-clean environment for juju to work with.
<cole> wut's up charmers!
<SpamapS> jason_: the idea is once the agent starts, you don't have to reinstall anymore. :)
<jason_> SpamapS, I commented that lp:juju/pkg line out, but maybe that wasn't sufficient...
<jason_> My systems all reinstalled, came up, and failed on that
<hazmat> hola cole, tis release time
<cole> hazmat: nice!
<SpamapS> jason_: can you pastebin /var/log/cloud-init-output.log ?
<jimbaker> SpamapS, i would be more concerned about ensuring the test suite, perhaps just a subset, runs successfully on os x
<SpamapS> jimbaker: indeed, would be good to have an OS X VM somewhere running as a jenkins slave
<jason_> SpamapS, http://paste.ubuntu.com/705647/
<cole> so this topic came out of the openstack meet up around donate: thoughts? https://blueprints.launchpad.net/juju/+spec/dynamic-juju
<SpamapS> jason_: hm, that was less helpful than I thought it would be
<SpamapS> jason_: perhaps use 'juju-origin: distro' in your environments.yaml
<SpamapS> cole: as I've said before.. you can write that now without changing anything in juju and just have it run add-unit/remove-unit based on any number of metrics coming from any number of metric gathering services.
<SpamapS> Basically I see no roadblocks to just doing that in charms.
<jason_> SpamapS, ok, I'm making that change
<SpamapS> jason_: I'm updating that wiki page too.. its woefully off
<cole> SpamapS: fantastic, i've not actually heard that said.  Last conversation I had was around M-Collective and Gustavo said that it was a roadmap item.
<SpamapS> cole: well we can always improve it.
<SpamapS> cole: but really, how hard would it be to just have a charm that runs some kind of data collector and applies rules to the collected data
<SpamapS> This isn't exactly a new idea. :)
<SpamapS> juju just makes it a lot more flexible
<cole> SpamapS: agreed, I should have been more specific. Basically nebula wants to help with getting information out of ganglia and nagios to automatically do the scaling, or if the direction is agent based… so be it.
<SpamapS> ganglia is agent based. :)
<SpamapS> and nagios can be
<cole> dedicated agent ;P
<cole> message here being, less daemons the better
<SpamapS> cole: yeah I'd say JFDI and if juju is getting in your way, thats the time to look at adding stuff to juju.
<SpamapS> I do think that juju will have a rich plugin arch at some point, and that will be the place where this lives.
<SpamapS> But I don't have nearly as much influence as niemeyer. :)
<SpamapS> cole: there are some who would say more daemons means more separation of concerns. :)
<SpamapS> which should lead to more robust systems
<SpamapS> probably more important to make sure only *one* daemon does collection than to try and make one daemon rule them all
<SpamapS> otherwise I'd say you should look at adding this to upstart :)
<SpamapS> cole: there is one bug that you'll probably need fixed before this becomes easy..
<SpamapS> cole: bug 806241 will allow multiple charms in a single machine/container
<_mup_> Bug #806241: It should be possible to deploy multiple units  to a machine (service colocation) <production> <juju:Confirmed> < https://launchpad.net/bugs/806241 >
<SpamapS> cole: that would be needed to make sure a single collection service was deployed everywhere.
<hazmat>  cole first step would be to get an api endpoint onto juju
<SpamapS> hazmat: bah, cmdline for the first run. ;)
<SpamapS> API does seem like something that needs to happen *soon* though
<hazmat> SpamapS, actually the command line would switch to using the api
<hazmat> should make the cli a bit faster
<cole> i like it!
<jason_> SpamapS, another thing from my env.yaml -- I have default-series: oneiric-juju -- I think I added that when it complained about a default series
<jason_> does that look ok?
<SpamapS> jason_: you need it to be    default-series: oneiric
<SpamapS> jason_: that has nothing to do with the cobbler profile
<hazmat> jason_, its more like what release/version of ubuntu do you want to use
<jason_> yes, makes sense
<niemeyer> jason_: Hmm.. how did it complain about the default series?
<niemeyer> jason_: The default value for this should actually work
<jason_> My systems have started not pxe booting -- does it seem like that's because it's what orchestra intends, or some other problem?
<hazmat> niemeyer, that value is never validated
<niemeyer> hazmat: I mean that the value shouldn't have to be changed
<jason_> niemeyer, there was no value in the sample I started with, as I recall, juju complained about it when I was trying to bootstrap
<hazmat> it does have to be set for osx i believe
<hazmat> since it can't be inferred
<niemeyer> hazmat: Ahh, ok, ECONTEXT, sorry
<niemeyer> hazmat: Hmm.. even though, I'm pretty sure it's part of the default config
 * niemeyer checks
<niemeyer> It is indeed
<niemeyer> hazmat: It shouldn't complain anyway
<niemeyer> So, more store..
<jason_> SpamapS, when I pastebinned my cloud-init-output log earlier, it was the wrong log, here it is: http://paste.ubuntu.com/705669/
<jason_> that's from one that just ran
<SpamapS> jason_: oh that looks like a bug in the etckeeper package
<niemeyer> SpamapS: OMG.. the never-going-away UnicodeDecodeError..
<jason_> SpamapS, should that not be affecting juju?
<niemeyer> SpamapS: Isn't it bzr itself, actually?
<niemeyer> jason_: It's breaking the installation of packages
<niemeyer> jason_: Do you have accents in your current pwd
<niemeyer> ?
<niemeyer> Hmm.. no, it's not the current pwd
<niemeyer> The path isn't clear from the traceback
<jason_> niemeyer, no
<jason_> it'
<jason_> it's ubuntu -- the default
<jason_> oh
<jason_> yeah, I get you
<jason_> no
<niemeyer> jason_: Something like this is what's going on there:
<niemeyer> >>> u"é" < "é"
<niemeyer> Traceback (most recent call last):
<niemeyer>   File "<stdin>", line 1, in <module>
<niemeyer> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
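
The usual cure for that class of error, for reference: decode the byte string explicitly instead of letting python 2 fall back to ascii.

    # in a utf-8 source file (with a coding declaration):
    byte_string = "é"                  # utf-8 bytes
    unicode_string = u"é"
    unicode_string == byte_string.decode("utf-8")  # True; no implicit ascii
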
<SpamapS> Its etckeeper
<SpamapS> trying to bzr add /etc
<SpamapS> jason_: you can remove 'ubuntu-orchestra-client' from the kickstart/preseed for oneiric-i386-juju .. none of that stuff is needed
<jason_> SpamapS, will do
<niemeyer> jason_: Are you consciously setting the encoding to 'ANSI_X3.4-1968'?
<SpamapS> we had similar problems with orchestra+juju last week in Boston
<jason_> niemeyer, no
<niemeyer> jason_: Ok.. it's likely a default from ascii somewhere then
<SpamapS> jason_: I think those errors are actually ok and not breaking your install
<SpamapS> Its something broken with the way cloud-init runs on /dev/console
<SpamapS> need etckeeper to be fully seeded or it stops and asks for stuff
<jason_> SpamapS, there's a $SNIPPET('orchestra_client_package') in /var/lib/cobbler/kickstarts/juju.preseed -- is that the line to remove?
<SpamapS> jason_: I think so yes
<SpamapS> jason_: still I think you may be up and running, did you try a 'juju status' ?
<SpamapS> jason_: you may also see a debconf prompt on tty1
<SpamapS> jason_: if so you have to stop the getty and press enter through that
<jason_> SpamapS, no, but I'm in the process of reinstalling on all three right now
<SpamapS> if you see it, let me know, I'll report the bug
 * SpamapS 's brain is fuzzy from all the churn last week
<jason_> SpamapS, it takes forever to keep doing that, but it seems like the only way to get them to try again, maybe I'm wrong there
<SpamapS> jason_: once you get zookeeper up and running, you shouldn't have to repeat the install
<jason_> SpamapS, ok, juju status was interesting, complaining that my systems aren't reachable from my client -- they're on a network only with the server, so that's something
<SpamapS> jason_: yeah you have to be able to reach them by ssh
<SpamapS> jason_: simplest thing to do is to run the client from the orchestra server
<jason_> SpamapS, so once bootstrap completes, zookeeper is up?
<SpamapS> jason_: no, the other way around
<SpamapS> jason_: bootstrap returns as soon as it has told cobbler to boot the machine in a bootstrap configuration..
<jason_> got it
<SpamapS> jason_: then you have to basically poll the machine to see if zookeeper is up and running
<SpamapS> jason_: hopefully once you have a running environment, you don't have to do bootstrap anymore.
<jason_> SpamapS, is that with juju status?
<SpamapS> jason_: thats the simplest way yes
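A rough sketch of that polling, assuming nothing beyond 'juju status' exiting non-zero until zookeeper answers:

    import subprocess
    import time

    def wait_for_zookeeper(retries=30, delay=20):
        """Poll 'juju status' until the environment responds."""
        for _ in range(retries):
            # Exit status 0 means the client reached zookeeper and
            # got a state snapshot back.
            if subprocess.call(["juju", "status"]) == 0:
                return True
            time.sleep(delay)
        return False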
<jason_> SpamapS, cool -- about orchestra and monitoring, is there a separate nagios web interface?
<SpamapS> jason_: I believe nagios ends up running on the orchestra-monitoring-server .. which is not necessarily the same box as orchestra-provisioning-server.
<SpamapS> jason_: but I'm no Orchestra expert. :-P
<SpamapS> jason_: #ubuntu-server has a few people who are, and also the mailing list will get a lot of answers. Docs are still pretty hard to come by as we're still in tech-preview "best effort" mode.
<jason_> SpamapS, ok -- yeah
#juju 2011-10-11
<jason_> SpamapS, I'm getting an invalid ssh key now -- I did juju status, and it told me the key had been changed (from my many reinstalls, no doubt) and asked whether to accept -- I said no, meaning to cancel out, delete the known hosts file, and retry, and now it's Invalid SSH key each time
<jason_> ok, I copied all my .ssh files from the client I'd been working on to the server -- seems to have gotten me past that bit
<jason_> juju status completed -- looks like my first system is in place
<hazmat> jason_, woot!
<_mup_> juju/go-store r18 committed by gustavo@niemeyer.net
<_mup_> Implemented URL.WithRevision.
<jason_> hazmat, mysql deploy success, too...
<niemeyer> jason_: ho ho
<_mup_> juju/go-store r19 committed by gustavo@niemeyer.net
<_mup_> New store package with AddCharm and OpenCharm interface.
<_mup_> The interface to the package is trivial, but internally it actually
<_mup_> handles all the necessary logic for concurrent runs of the algorithm,
<_mup_> including mongo-based atomic locks with expiration, multi-URL synchronous
<_mup_> revision bumping as described in the charm specification, GridFS-based
<_mup_> memory-friendly uploading for large files, and ponies too.
<_mup_> Lacks documentation and sha256 handling, though.. but I need some sleep.
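The "mongo-based atomic locks with expiration" in that commit message are only described, not shown, here; as a hedged illustration of the general technique (nothing to do with the actual go-store code), such a lock can hinge on MongoDB's unique _id index:

    import time
    from pymongo import MongoClient, errors

    def try_lock(db, name, ttl=60):
        """Acquire a named lock, treating stale holders as expired."""
        now = time.time()
        # Reap the lock document if its previous holder let the TTL lapse.
        db.locks.delete_one({"_id": name, "expires": {"$lt": now}})
        try:
            # The unique index on _id makes the insert an atomic test-and-set.
            db.locks.insert_one({"_id": name, "expires": now + ttl})
            return True
        except errors.DuplicateKeyError:
            return False

    # Usage sketch: try_lock(MongoClient().demo, "charm-update")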
<niemeyer> Night all
<_mup_> juju/expose-retry r402 committed by jim.baker@canonical.com
<_mup_> Support retrying port mgmt ops in periodic machine check
<_mup_> Bug #872164 was filed: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found <juju:New> < https://launchpad.net/bugs/872164 >
<jamespage> morning - I took the liberty of pointing the bug reporter for bug 872164 in the right direction and marked the bug as invalid
<_mup_> Bug #872164: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found <juju:Invalid> < https://launchpad.net/bugs/872164 >
<fwereade_> thanks jamespage, I just saw, much better response than mine
<jamespage> fwereade_, np
<jamespage> I think I must be missing something: should the stop hook be called when a unit is removed from a service using remove-unit?
<rog> where can i find documentation for txaws?
<rog> oops, LMGTFY
<hazmat> good morning
<hazmat> fwereade_, the docs still look out of date.. https://juju.ubuntu.com/docs/user-tutorial.html#deploying-service-units
<hazmat> i think jimbaker mentioned yesterday they weren't regenerating
<hazmat> jamespage, on bug 871966 when you say local juju environment you mean a local provider?
<_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems <cloud-init (Ubuntu):Confirmed> <cassandra (juju Charms Collection):New> < https://launchpad.net/bugs/871966 >
<hazmat> jamespage, the stop hook is not called
<hazmat> jamespage, pretty much everything that deals with remove/destroy works one level up from the supervisor of the thing being killed
<hazmat> with the notion that even if the thing is AWOL, the action will happen
<rog> hazmat: hiya
<hazmat> rog, txaws is pretty much UTSL for most questions imo
<rog> hazmat: yeah, i discovered that. thanks.
<rog> foundations of sand :-)
<hazmat> rog, not really.. it's well tested. but yeah.. it's a consequence of using twisted, vs using the de facto standard python library for aws (boto)
<rog> uh huh
<hazmat> hmm.. interesting
<jamespage> hazmat: the comment on bug 871966 does refer to the local provider - but that provides an IP address for private-address anyway
<_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems <cloud-init (Ubuntu):Confirmed> <cassandra (juju Charms Collection):New> < https://launchpad.net/bugs/871966 >
<hazmat> jamespage, yup and private-address==public-address there
<hazmat> and it shows up in juju status
<jamespage> hazmat: I now have something that works with the local provider, and on ec2 and openstack
<hazmat> jamespage, nice
<hazmat> jamespage, comments about the local provider probably aren't relevant on a cloud-init bug, since the local provider doesn't use cloud-init.. fwiw
<jamespage> hazmat: they more referred to the fix for cassandra
<hazmat> ah.. ic. its linked
<jamespage> yep
<jamespage> hazmat: with regards to units leaving a service/not calling stop I was trying to figure out the best way to remove a node from a cassandra cluster
<jamespage> because the node does not get shutdown, it remains in the ring
<robbiew> rog: ping
<rog> robbiew: pong
<robbiew> rog: have you registered for UDS?
<rog> robbiew: i think so.... but i'll just check
<rog> robbiew: yes, i have
<robbiew> rog: -> http://uds.ubuntu.com/register/ :)
<rog> robbiew: i did it on 15th Sep...
<rog> and flights all booked too
<robbiew> rog: hmm, okay.  I'll talk to our admins then, thx
<rog> robbiew: at any rate, i've got a confirmation email from marianna
<rog> robbiew: i'll just check the web site directly
<robbiew> rog: ah, cool
<robbiew> nevermind then
<robbiew> :)
<rog> robbiew: ah, maybe i didn't register on the linaro web site. i think i only did the UDS registration.
<hazmat> jamespage, hmm
<hazmat> jamespage, yeah.. i guess we really should be calling stop on units
<jamespage> hazmat: I need to deal with two scenarios - one where its a controlled removal
<jamespage> and one where the node goes AWOL
<hazmat> jamespage, pls file a bug
<hazmat> i can look at that today
<jamespage> hazmat: ack - doing now
<hazmat> for stopping a machine it's almost irrelevant, since we shutdown the machine, but for a unit, if we don't call stop, there isn't anything to keep it from continuing to run
<hazmat> at least till all units are containers
<hazmat> and then the container is killed
<robbiew> rog: UDS is all you need ;0
<robbiew> ;)
<rog> robbiew: ok, i'll ignore the FAQ then...
<hazmat> but we really can't do the latter on ec2, till we figure out some magical networking solution, or stop doing dynamic port management
<hazmat> unless we assume a single unit per machine in ec2 and do a targeted forward rule per exposed port
<_mup_> Bug #872264 was filed: stop hook does not fire when units removed from service <juju:New> < https://launchpad.net/bugs/872264 >
<jamespage> hazmat: ^^
<jamespage> I tried to document the two challenges I have specifically with the cassandra charm
<hazmat> jamespage, thanks
<jamespage> I guess they may apply to other charms that have similar ring storage methods
<hazmat> jamespage, so on 2) and 1) the other units should both detect the removal
<jamespage> hazmat: yes - they do
<rog> just realised that "canonical/linaro employee" means "(canonical AND linaro) employee" not "(canonical OR linaro) employee"...
<rog> doh
<jamespage> hazmat: and I could use the hook on the remaining nodes to deal with both situations
<jamespage> I would need to write it such that only one node completes the action
 * jamespage thinks about that one
 * SpamapS awakens.. far too early
<niemeyer> Good morning all
<rog> niemeyer: yo!
<SpamapS> jamespage: I think there's another bug asking for similar functionality..
<SpamapS> jamespage: bug 862422
<_mup_> Bug #862422: Provide a way for services to protect units during dangerous operations <juju:Confirmed> < https://launchpad.net/bugs/862422 >
<SpamapS> jamespage: swift is a similar ring service and has times where adding or removing is a bad idea
<jamespage> SpamapS, agreed - it looks very similar
<SpamapS> Does seem like the stop hook should handle this
<jamespage> SpamapS: it would do for controlled removal
<SpamapS> jamespage: not sure I understand the AWOL case
<jamespage> SpamapS, thats more of a housekeeping case
<jamespage> in cassandra, if you never remove entries for nodes that have gone away ('Down' status) it gets very crufty
<jamespage> also you want to ensure that loadbalancing etc.. get re-adjusted as the node won't be coming back
<hazmat> jamespage, but don't you get a departed event at all other nodes when one goes AWOL?
<jamespage> SpamapS, yes
<jamespage> sorry - I mean hazmat
 * hazmat checks the bug report
<SpamapS> jamespage: yeah that should be detected in the peer relations
<SpamapS> cassandra has a prescribed procedure for removing a dead node from the ring
<rog> niemeyer: i'm porting the ec2 launch code and i'm not sure how goamz's AuthorizeSecurityGroup is supposed to work the way it's being used in the python code. here's a comparison: http://paste.ubuntu.com/706060/
<jamespage> SpamapS, it does
<SpamapS> so on departed.. you would run that procedure for the departed unit
<hazmat> jamespage, so in the case of 1) the desire is for the actual termination of the unit to hang till the stop (which is potentially a long running op) completes?
<hazmat> and of course to execute stop as part of 1
<jamespage> hazmat: ideally yes
<jamespage> SpamapS: what information is provided when the -departed hook fires about the remote service unit?
<hazmat> jamespage, doesn't the same problem exist in reverse when adding units.. as i recall for cassandra (might be outdated), you're supposed to only add a single unit at a time
<niemeyer> rog: Looks like there's a protocol setting missing
<hazmat> jamespage, just the unit name and that it departed
<niemeyer> rog: Check out the docs and the implementation
<SpamapS> hazmat: +1 for that, let stop be proactive about locally stored data
<hazmat> SpamapS, niemeyer g'morning
<rog> niemeyer: the python code doesn't seem to set a proto - i was just checking that it wasn't an obvious bug
 * hazmat just upped the ante in his war against rodents, bringing in the exterminator
 * SpamapS wishes the time would change, its pitch black here in LA at 6:30am :-P
<niemeyer> rog: Maybe it has a default?
<SpamapS> we're porting the ec2 launch code?
<jamespage> hazmat, there is a restriction on adding units - N+N rather than N+1
<rog> niemeyer: it seems to have two distinct modes of operation
<rog> there's no obvious default in the python code
<rog> i'll recheck though
<niemeyer> rog: They're both backed by the same implementation
<niemeyer> rog: The same API
<niemeyer> rog: If one of them is failing, the call is different.. just figure how it's different and you'll understand the problem
<SpamapS> hazmat: bug 862422 has a case where swift requires that nodes wait to be added until rebalance is done
<_mup_> Bug #862422: Provide a way for services to protect units during dangerous operations <juju:Confirmed> < https://launchpad.net/bugs/862422 >
<jamespage> SpamapS, hazmat: Cassandra has a similar requirement
<hazmat> hmm
<SpamapS> Its not that hard on the add-unit case though
<SpamapS> you can error out the joined event
<hazmat> they can't really scan for a rebalance attribute since its being set by the same hook that's doing it
<hazmat> and the hook values are only flushed at the end of the hook
<SpamapS> and admins will just have to resolve --retry
<SpamapS> hazmat: the services should protect themselves
<SpamapS> hazmat: there's somewhere that an admin has to look to see if a re-balance is going on
<SpamapS> thats where the hook should look
<hazmat> SpamapS, there isn't any service level logic.. atm.. its got to be what the units can coordinate among themselves
<jamespage> so - just to flip back to my -departed thinking
<jamespage> ATM I will need to a) detect which node needs to be removed from the ring
<SpamapS> hazmat: yeah, I don't think preventing it is juju's problems. Handling failures gracefully should be all it needs to do.
<jamespage> and b) elect which of the remaining units is going to execute the removal
<jamespage> in the -departed hook
<SpamapS> Though this does go back to the --wait argument where as an admin I'd like to get feedback from the command's intended actions.
<hazmat> jamespage, so a leader election/detection cli api for hooks
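Until such an api exists, a charm can fake the election inside the -departed hook; a minimal sketch (hook tool names as they existed at the time, cassandra specifics elided) lets the lexically-lowest surviving unit run the removal:

    import os
    import subprocess

    def surviving_units():
        # relation-list prints the peer units still in the relation.
        return sorted(subprocess.check_output(["relation-list"]).split())

    def i_am_leader():
        # Every peer computes the same answer, so exactly one unit
        # (the lowest-named one) ends up performing the ring removal.
        me = os.environ["JUJU_UNIT_NAME"]
        peers = surviving_units()
        return not peers or me < min(peers)

    if i_am_leader():
        departed = os.environ.get("JUJU_REMOTE_UNIT")
        # ... run cassandra's dead-node removal for `departed` here ...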
<jcastro> Does anyone want to volunteer to do a juju session for ubuntu openweek? https://wiki.ubuntu.com/UbuntuOpenWeek
<rog> niemeyer: hmm, it looks like the python code is using an undocumented feature of aws.
<jamespage> hazmat, that would be nice
<jamespage> as it would prevent some fragile hack in the charm hook
<hazmat> rog, that api has several different spellings, they are documented
<jamespage> I'm doing something similar at the moment for unit bootstrapping - which is not 100% reliable
<jamespage> when units join the peer relation
<SpamapS> jcastro: I'm down for it.
<jcastro> SpamapS: can you claim a block please?
<jcastro> SpamapS: I'll do it with you if you want
<SpamapS> Yeah at least be there to help me with the bot. ;)
<hazmat> rog, txaws is a poor reference impl to look at.. https://github.com/boto/boto/blob/master/boto/ec2/connection.py#L1917
<hazmat> is much better at api coverage and docs, notice right above that impl there is support for a deprecated mechanism with slightly different spelling
<lynxman> hazmat: SpamapS: got the juju macports done and working, just a versioning question; let me paste here the versions of the python packages I'm using and let me know which ones you would deem as "need upgrading"
<rog> hazmat: the name "SourceSecurityGroupName" is used as a parameter. i'd have thought that should be documented in http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/index.html?ApiReference-query-AuthorizeSecurityGroupIngress.html
<rog> given that seems to be the entry point.
<lynxman> argparse (1.2.1), zookeeper (3.3.0), python-regex (0.8.0), python-txaws (0.2), pydot (1.0.25), python-argparse (1.2.1)
<lynxman> hazmat: maybe we should upgrade txaws?
<rog> niemeyer: looks like a new entry point is warranted. perhaps the original call would be better named AuthorizeSecurityGroupIP. hmm.
<hazmat> rog, it's quite possible txaws is not targeting the latest api
<hazmat> rog, actually highly likely given its lack of dev
<rog> hazmat: txaws has the call. as does boto. but the AWS documentation doesn't mention that variant AFAICS
<rog> it looks like all the language APIs have that variant. do you know what it's actually doing? authorizing one group with the privileges of another?
<rog> that would be my guess, but it would be nice to know for sure, so that i can choose a good name.
<hazmat> rog, aws supports both because they have a versioned api, boto has separate implementations for each version one marked deprecated.
<hazmat> rog, it is documented, but not under the latest version of the api docs which document the latest
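For comparison, boto (the library hazmat points at above) folds both spellings of AuthorizeSecurityGroupIngress into one method; roughly, the CIDR form and the older user/group-pair form differ only in which keyword arguments get passed. The group names and owner id below are made up:

    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()  # credentials taken from the environment

    # Spelling 1: open a port range to an IP block.
    conn.authorize_security_group(
        group_name="juju-demo", ip_protocol="tcp",
        from_port=80, to_port=80, cidr_ip="0.0.0.0/0")

    # Spelling 2: grant another security group access wholesale --
    # the user/group-pair form from the 2007-era API docs.
    conn.authorize_security_group(
        group_name="juju-demo",
        src_security_group_name="juju-demo-0",
        src_security_group_owner_id="123456789012")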
<jcastro> SpamapS: which slot do you want?
<lynxman> hazmat: so what do you reckon :)
<rog> hazmat: ah, so... we have to ask: what's the equivalent of that old call in the new API?
<rog> i'll try and find the old docs
<hazmat> lynxman, so txaws doesn't have a release with the openstack fixes atm
<hazmat> and i should probably push out a new version of txzookeeper
<hazmat> lynxman, give me a moment, i'll cut releases for both
<lynxman> hazmat: cool :)
<hazmat> lynxman, besides that.. what's python-regex?
<hazmat> lynxman, we use the builtin re module not a third party lib
<hazmat> unless a dep needs it like pydot..
<hazmat> rog, it should be pretty clear from context how to translate
<lynxman> hazmat: I can drop it as a dependency then, pydot has its own :)
<rog> hazmat: perhaps. this page talks about a "user/group pair permission", but perhaps that's just code for "allow all IP access". http://docs.amazonwebservices.com/AmazonEC2/dg/2007-01-03/ApiReference-Query-AuthorizeSecurityGroupIngress.html
<hazmat> lynxman, so python-txzookeeper 0.8.0 is needed as well
<hazmat> lynxman, and zookeeper 3.3.3 .. there are definitely bug fixes in the py bindings we need
<lynxman> hazmat: alright, I'll upgrade both then, ty
<hazmat> lynxman, np.. the latest pypi release for txzookeeper looks good, off to push out a 0.2.1 txaws release
<lynxman> hazmat: lovely, thanks! :D
<rog> tcp port numbers are 16 bit even with IPv6, right?
 * niemeyer looks at rog with the eye
<rog> ok, ok, i should know that.
<SpamapS> jcastro: sorry, family stuff, I'll grab one in the next 2 hrs
<rog> niemeyer: just checking: have you already written some Go code to parse environments.yaml?
<niemeyer> rog: No, that was the first bit I suggested you could start with
<rog> ok, cool
<rog> (BTW the instance starting and group set up code is all working now)
<niemeyer> rog: Please follow the existing convention in the charm package
<niemeyer> rog: Wow, neat!
<niemeyer> rog: How're you testing it?
<rog> niemeyer: it's just a stub file currently, no tests written so far
<niemeyer> rog: Heh
<niemeyer> rog: So there's nothing..
<rog> niemeyer: just running it and going to the aws console to check
<niemeyer> rog: :)
<niemeyer> rog: Please write tests with the logic, rather than retrofitting them
<niemeyer> rog: We should follow a similar model to what was done with goamz itself
<niemeyer> rog: Rather than the mocking craziness we have in the Python side
<rog> niemeyer: yes, tests are the next thing i'm putting in. the code isn't even in a package yet.
<niemeyer> rog: Ok, it's a spike then
<rog> niemeyer: a spike?
<niemeyer> rog: yeah, a temporary hack to get a feeling of the problem
<rog> niemeyer: yeah, although i've ported a lot of the logic from the original python, so it should be trivial to do it right.
<rog> niemeyer: this is all i've got so far: http://paste.ubuntu.com/706139/
<niemeyer> rog: Nice
<hazmat> lynxman, latest txaws release @ http://launchpad.net/txaws/trunk/0.2/+download/txAWS-0.2.1.tar.gz
<lynxman> hazmat: lovely, thanks :)
<rog> niemeyer: what's the best approach to testing with ec2? actually interact with ec2 directly?
<niemeyer> rog: No, we can follow a similar model from goamz
<rog> ok, i'll have a look.
<rog> niemeyer: BTW is this the only spec for the environment yaml? https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment
<niemeyer> rog: Please read the Python code
<rog> ok
<lynxman> hazmat: new ports submitted, contacted one of the maintainers and it's *possible* that juju will be in the archive by next week
<hazmat`> lynxman, sweet!
<SpamapS> lynxman: is there an artifact somewhere where I can test and provide positive feedback to the maintainers?
<lynxman> SpamapS: I can send you my portindex branch if you want
<jimbaker> SpamapS, this branch should hopefully fix the problem you saw on openstack with expose failing: lp:~jimbaker/juju/expose-retry
<SpamapS> Hah, I love this code
<SpamapS> self.mocker.call(simulate_random_failure)
<SpamapS> :)
<SpamapS> jimbaker: indeed that should retry those ops. There are many others.. I think we just have to get defensive about txaws
 * hazmat lunches
<niemeyer> I'm off to lunch too.
<jimbaker> SpamapS, :). we need to be defensive about txaws because it needs work and it necessarily deals with bad stuff. in general, txaws will fail early, if it has a bad payload it can't parse
<jimbaker> for commands like destroy-environment that can be repeated, this may be ok. for agents, we need to do retries
<jimbaker> i'm pretty certain that the provisioning agent retry mechanism (ignoring that it's a SPOF for now) is robust, so long as we have errbacks defined such that stuff doesn't just stop. in the case of expose, the only place where txaws can be called is that one method (open_close_ports_on_machine), so trapping there and then using the existing resync mechanism for retries would seem to suffice
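The shape jimbaker describes — trap failures at the one txaws call site and let the existing resync retry — might look roughly like this; the method name is borrowed from the discussion and the rest is assumed:

    from twisted.internet import defer
    from twisted.python import log

    @defer.inlineCallbacks
    def process_machine_ports(agent, machine_id):
        try:
            yield agent.open_close_ports_on_machine(machine_id)
        except Exception:
            # Log and swallow: the periodic machine check will resync
            # and retry the port operation on its next pass.
            log.err(None, "port op failed; deferring to periodic resync")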
<SpamapS> Are there any operations that the provisioning agent does w/ txaws where it shouldn't retry on error?
<SpamapS> expose/unexpose was just the most common fail we had
<SpamapS> there were others
<SpamapS> any time listing instances returned empty ... things were likely to just grind to a halt
<jimbaker> SpamapS, i suspect the problem with that is seen here: http://pastebin.ubuntu.com/706206/, specifically lines 17-21
<jimbaker> i need to check that get_machines will always raise a ProviderError if it fails
<jimbaker> SpamapS, no, it only catches EC2Error, but txaws will raise other errors
<SpamapS> jimbaker: yeah seems like we should be able to trust our internal libraries to always raise only ProviderError. :)
<jimbaker> SpamapS, that's definitely not the convention we have
<jimbaker> no catchalls
<SpamapS> seems like catchalls at external libraries would be a good idea, but not for internal ones.
<jimbaker> except perhaps in some twisted code where we use an errback setup, and then that does catch everything
<jimbaker> SpamapS, yeah, i don't know. i think i can defend the existing mechanism by stating that for nonagent code, it's better to failfast, so any unknown errors bubbling up is fine
<jimbaker> SpamapS, but if i look at periodic_machine_check, it does the right thing: it always reschedules itself, even if there's an error (equiv to inlineCallbacks with a finally)
<jimbaker> SpamapS, so it should be resilient. and of course, if txaws is bad here, vs just getting an occasional bad payload, there's nothing that can be done anyway except to repeatedly log the problem
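And the "always reschedules itself" behaviour credited to periodic_machine_check above is essentially a try/finally around the work plus reactor.callLater; a stripped-down sketch with a hypothetical check callable:

    from twisted.internet import defer, reactor

    INTERVAL = 60  # seconds between passes; the real value is a guess

    @defer.inlineCallbacks
    def periodic_check(check):
        try:
            yield check()
        except Exception:
            pass  # a failed pass must not kill the loop
        finally:
            # Reschedule unconditionally, success or failure.
            reactor.callLater(INTERVAL, periodic_check, check)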
<SpamapS> jimbaker: thats really what I'm wondering.. I don't know of any action the provisioning agent takes that shouldn't just be retried over and over. I will say that we need a better way than debug-log to track provisioning operations.
<jimbaker> SpamapS, i think this would be helpful, bug 769120
<_mup_> Bug #769120: Ensemble status shouldn't report dead units based soley on state, but also on presence. <juju:New> < https://launchpad.net/bugs/769120 >
<hazmat> niemeyer, the doc builds on juju docs have been broken for a while.. they're still referencing old ways of deploying
<jimbaker> SpamapS, ok, i think i see one bug here however: watch_machine_changes is a watch, and it calls process_machines. so this watch would stop working if process_machines fails because of some random exception from txaws
<niemeyer> hazmat: Can you please raise that up in #is?
<jimbaker> SpamapS, we would still see the resync from the periodic_machine_check, but the provisioning agent wouldn't respond to changes to ZK as they happen
<SpamapS> jimbaker: exactly!
<jimbaker> SpamapS, cool, glad to see your evidence corresponds to what i'm seeing here :)
<SpamapS> jimbaker: did we ever open an actual bug for this?
<SpamapS> I suppose you can just lpad it :)
<jimbaker> SpamapS, i'll just open it conventionally, since i don't have a branch in place to fix it
<hazmat> niemeyer, done.. is there any one i should ping about it?
<niemeyer> hazmat: Hmm.. #is?  Who did you ping if you're wondering about who to ping?
<hazmat> niemeyer, i just put the message about the problem on #is.. just wondering if i should bring it to a particular person's attention on #is
<niemeyer> hazmat: Ah, gotcha
<niemeyer> hazmat: No, I'd just wait to see if someone there is able to help
<niemeyer> hazmat: Otherwise mail rt
<hazmat> niemeyer, k, thanks
<_mup_> juju/go-store r20 committed by gustavo@niemeyer.net
<_mup_> Introduced revision key tracking so that we can detect whether a
<_mup_> charm update is already the current tip across all requested URLs
<_mup_> or not. If at least one of the URLs are out-of-date, the update
<_mup_> will proceed and bump a revision on all of them.
<rog> i'm off for the day. see y'all tomorrow.
<niemeyer> rog: Cheers!
<_mup_> Bug #872378 was filed: Provisioning agent stops watching machine changes in ZK <juju:New> < https://launchpad.net/bugs/872378 >
<jimbaker> SpamapS, i just filed bug 872378
<_mup_> Bug #872378: Provisioning agent stops watching machine changes in ZK <juju:New> < https://launchpad.net/bugs/872378 >
<SpamapS> jimbaker: thanks, will confirm and mark High
<jimbaker> SpamapS, thanks, just what i was going to ask :)
<SpamapS> oh you did that :)
<jimbaker> i did the high part, you can still confirm it however
<SpamapS> need to raise a txaws bug too
<jimbaker> i'll get the bug dance better next time
<SpamapS> well I am pretty religious about not confirming my own bugs :)
<jimbaker> SpamapS, it's an interesting question about txaws, but given that it's a closely related project, worth seeing their philosophy here - do they handle bad payloads or not?
<SpamapS> no
<SpamapS> the project expects its AWS partner to be well behaved
<SpamapS> so there's also a nova bug to raise
<SpamapS> as nova shouldn't be returning empty ever
<SpamapS> heh.. we should probably have a little triage party to clean up txaws's bug list.
<jimbaker> got it. but regardless we would still expect to see TimeoutError, so there's some class of errors txaws will likely not handle
<SpamapS> 34 new, 72 open, 3 high..
<_mup_> juju/go-store r21 committed by gustavo@niemeyer.net
<_mup_> Track sha256 and store next to the charm information so we can answer
<_mup_> related API requests in the future.
<_mup_> juju/go-store r22 committed by gustavo@niemeyer.net
<_mup_> Copied log.go from personal project (mgo).
<jcastro> lynxman: heya, any update on the macports thing?
<jcastro> hazmat: hey is there an easy way to tell the local provider to use my existing apt cache instead of installing all this apt-cacher-ng business?
<hazmat> jcastro, i think he mentioned updating the portfile; he's going to ping one of the maintainers, with luck soon
<hazmat> jcastro, sadly no
<hazmat> jcastro, is the initial download a problem?
<jcastro> yeah, this close to release the mirrors are hammered, I'll suffer and find something else to do
<m_3> SpamapS: did you mention you had pending MW charm changes?
<SpamapS> m_3: everything I had is in lp:charm/mediawiki
<m_3> SpamapS: cool thanks
<_mup_> juju/go-store r23 committed by gustavo@niemeyer.net
<_mup_> Added info/debug logging across the charm storage operations.
<hazmat> jamespage, ping
<hazmat> jamespage, i'm wondering how problematic it is to always kill the unit's processes on removal instead of a controlled termination via stop
<SpamapS> hazmat: stop needs to be able to *cancel* the removal
<hazmat> SpamapS, there's not much distinguishing a unit removal to a service removal at that level
<SpamapS> It would be awesome if charms could prevent data loss without a --force flag by simply refusing to stop the service while it is vulnerable.
<hazmat> and units overriding the user express commands..
<hazmat> hmm
<SpamapS> is this only happening on destroy-service, not on remove-unit ?
<SpamapS> I do kind of think destroy-* should be more heavy handed
<hazmat> SpamapS, it would happen on either one, the mechanics are the same atm
<hazmat> SpamapS, how does the service know if its redundant or not?
<hazmat> service unit
<_mup_> juju/config-get r393 committed by kapil.thangavelu@canonical.com
<_mup_> juju get for service config/schema inspection
<SpamapS> hazmat: in the case of any clustered service, it will have some way to determine if removing this node is safe or not.
<SpamapS> hazmat: stop would also be a decent place for a single node service to signal some kind of snapshot or backup.
<SpamapS> so blocking until its done would be cool
<hazmat> SpamapS, the converse question is how to prevent problems with problematic charms, that might for example have a broken stop... or even well meaning ones that go out of control
<hazmat> decommissioning a node in cassandra is potentially a fairly long operation afaicr
<hazmat> we'll need intermediary states to properly convey status to a ui
<hazmat> ie. 'stopping'
<hazmat> we only have nouns now.. not verbs
<SpamapS> hazmat: --force ?
<hazmat> sounds reasonable
<SpamapS> hazmat: I see what you mean. Yes it would be cool if we followed upstart's model there and had a goal state, and the in-between states with hooks available for each state.
<hazmat> SpamapS, exactly
<hazmat> hmm.. well maybe not hooks available for each state, but at least the same re status
<hazmat> effectively it would be a hook per verb
<SpamapS> stop/running -> stop/hook-stop-running -> (if hook says so, stop/deferred-stop) -> stop/stopping-unit -> oblivion
<SpamapS> Like if a hook exits 100 , that means it is running the safe stop in the background
<SpamapS> then you can just keep trying to stop it, and getting back 100 until it's done decommissioning
<SpamapS> and you can still have a short timeout to deal with misbehaving charms
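A hypothetical stop hook following that convention — the exit code 100 and the completeness check are proposals from this conversation, not implemented juju behaviour — would boil down to:

    import sys

    DEFERRED_STOP = 100  # proposed "safe stop still in progress" code

    def ring_departure_complete():
        # Placeholder; a real cassandra charm would parse nodetool
        # output to see whether this node has left the ring.
        return False

    if ring_departure_complete():
        sys.exit(0)  # decommission finished, the unit may be destroyed
    # Report "still working"; juju would keep retrying the stop hook
    # until it sees 0, with a timeout for misbehaving charms.
    sys.exit(DEFERRED_STOP)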
<hazmat> i'm going to capture this discussion into the bug
#juju 2011-10-14
<DarrenS> testline
<_mup_> Bug #873907 was filed: Security group on EC2 does not open proper port <juju:New> < https://launchpad.net/bugs/873907 >
<shang> https://bugs.launchpad.net/juju/+bug/873907
<_mup_> Bug #873907: Security group on EC2 does not open proper port <juju:New> < https://launchpad.net/bugs/873907 >
<fwereade> shang: I think you need a "juju expose wordpress"
<fwereade> shang: and, I think you're right about the docs, someone mentioned them not getting updated
<fwereade> shang: I should probably try to find out how that's all meant to work :)
<shang> fwereade: so after the add-relation, just run the expose command?
 * shang testing it
<fwereade> shang: that should be it
<fwereade> shang: here's the critical section of the user-tutorial docs, that hasn't landed for somereason: http://paste.ubuntu.com/707807/
<fwereade> shang: (sorry I missed you before)
<shang> fwereade: no worries ;-)
<shang> fwereade: cuz I know Jane did a demo a few days ago, so I was pretty sure things should be working, just can't figure out the missing pieces...
<fwereade> shang: sadly I'm not really sure where to start looking for the magical documentation-updater
<fwereade> shang: it might be helpful, just for now, to grab trunk and "cd docs && make html"
<fwereade> shang: but I have a documentation bug to fix (unless someone else has already grabbed it) so once that's fixed I'll be bothering people incessantly about auto-updating
<fwereade> shang: if you confirm it's working for you, I'll mark the bug invalid and add an explanation
<shang> fwereade: the expose command should take care of the security groups part, right?
<fwereade> shang: that's right
<fwereade> shang: still having problems?
 * shang tried the expose, but didn't see EC2 security group open the port
<shang> fwereade: yeah
 * fwereade is perplexed and goes off to peer at the code
<fwereade> shang: did you get a "Service wordpress is exposed" message, but nothing happened?
<shang> fwereade: from the status command, the wordpress open-ports:[]
<shang> fwereade: yeah
<fwereade> shang: hmm; can you ssh to the bootstrap node and see if there's anything in the provisioning agent log?
<shang> fwereade: it looks fine from where I can see it
<shang> http://paste.ubuntu.com/707819/
<fwereade> shang: I remain perplexed :(
<fwereade> shang: let me see if I can repro
<shang> fwereade: ok, and even if I manually open port 80 on the wordpress security group, wordpress has not been configured... let me know if that's just me...
<fwereade> shang: hm, that then sounds like a problem with the charm
<shang> fwereade: does the 3306 needs to be open for the service to be configured?
<fwereade> shang: I'm afraid I don't know, everything I've done has been on the provider side... I've not had much experience debugging charms
<shang> fwereade: ah, ok...
<fwereade> wrtp, sorry, I *completely* missed you
<fwereade> shang: just a suggestion
<wrtp> fwereade: it was nothing anyway :-)
<fwereade> shang: FWIW, deploys and exposes fine for me
<wrtp> fwereade: BTW it was me that submitted that doc fix - isn't the documentation automatically updated?
<fwereade> shang: what's your juju-origin, and where did you get your charms from?
<fwereade> wrtp: I recall jimbaker saying that the auto-updating wasn't working, but I was distracted and never followed up
<wrtp> fwereade: i see
<shang> fwereade: I was using the -> sudo apt-get install charm-tools; charm update examples; charm getall examples
<shang> fwereade: let me refresh them, perhaps
<fwereade> shang: I'm not sure this is a relevant question, but just in case: do the charms you're deploying have a "revision" file?
<shang> fwereade: yes, wordpress: rev. 30
<shang> mysql rev. 103
<fwereade> shang: hm, I'm feeling pretty short on ideas :(
<shang> fwereade: um... :(
<fwereade> shang: it might be worth trying to just deploy wordpress from scratch with a debug-log running
<fwereade> shang: (all I mean is that the problem isn't obvious, not that I'm giving up ;))
<shang> fwereade: thanks! :D
<shang> fwereade: Let me give that another try
<fwereade> shang: (open a separate terminal and "juju debug-log" before you deploy)
<shang> fwereade: actually, I will run it on a different machine, maybe a fresh one and see if I can reproduce the issue
<fwereade> shang: cool, I'm about to try with the charms from charm-tools instead of trunk, see if I can repro that myself
<shang> fwereade: thanks a lot
<shang> fwereade: what is the command (or location) you get the charms?
<fwereade> shang: I've always just tested with the examples repo in trunk
<shang> fwereade: ok
<fwereade> shang: fyi, I'm seeing the same problem
<fwereade> shang: no idea why yet ;)
<shang> fwereade: so it is because the charm-tools
<fwereade> shang: that seems likely
<fwereade> shang: but I can't say exactly what yet
<shang> fwereade: ok, at least we know what is causing it... which is a good start :-)
<fwereade> shang: yep -- thanks :)
<hazmat> g'morning
<fwereade> hazmat: morning
<hazmat> fwiw, the juju docs ticket is pending here.. https://portal.admin.canonical.com/48456
 * hazmat tries to catch up on the back log
<fwereade> hazmat: sweet
<fwereade> ty
<hazmat> fwereade, np
<hazmat> shang, so you've got a service deployed and exposed and its not available?
<hazmat> and juju status says its 'started'?
<hazmat> shang, could you paste bin the output of 'juju status'
<hazmat> oh
<hazmat> fwereade, you're seeing it too?
<fwereade> hazmat: yeah, I guess it's something to do with the wordpress charm
<fwereade> hazmat: I'm floundering along in a semi-helpful way, but this is the first charm I've made any attempt at debugging ;)
<hazmat> shang, fwereade, so getting the unit log is pretty helpful to understanding unit specific problems
<hazmat> shang, fwereade before deploying, you can get it directly from juju if you start a juju debug-log in a separate shell before deploying/relating .. it captures all the logs from all the agents..
<hazmat> shang, fwereade after the fact, you can use 'juju ssh wordpress/0' to login directly to the machine
<hazmat> the log for a unit lives at /var/lib/juju/units/wordpress-0/formula.log   i believe
<fwereade> hazmat: hm, I was aware of the existence of juju ssh, I just never thought of actually using it :/
<hazmat> fwereade, no worries, we have better tools as well..
<fwereade> hazmat: debug-log is helpful, indeed
<shang> fwereade: I ran the command: bzr branch lp:~charmers/charm/oneiric/mysql/trunk mysql
<shang> fwereade: still getting the same issue
<shang> hazmat: let me get u the logs
<hazmat> fwereade, shang  real debugging.. is using juju debug-hooks wordpress/0, right after deploying the unit, it will set up a tmux session on the machine
<hazmat> and pop up new windows for hook executions, with all the hook env variables setup. you can manually execute the hook or interactively edit/perform work
<hazmat> it's good to have a log first of what's wrong though
<shang> hazmat: http://pastebin.ubuntu.com/707872/
<shang> hazmat: ok, let me try again
 * hazmat tries with the local provider
<shang> fwereade: which trunk did u use?
<fwereade> shang: sorry, I was referring to the examples in juju trunk
<shang> fwereade: ah, ok
<hazmat> shang, so status looks good,  getting the /var/lib/juju/units/<unit-name>/charm.log file  is probably needed to debug a charm further
<hazmat> we should probably have a special cli option builtin for this purpose
<shang> hazmat: shouldn't the open-ports have the 80 in it?
<hazmat> shang, it should
<hazmat> shang, the wordpress charm in principia never does open-port
<hazmat> which is why its broken
<hazmat> shang, nice catch
<shang> hazmat: so we start the wordpress instance
<shang> and run the command:  juju debug-hooks wordpress/0
<shang> in another terminal to see the debug info?
<hazmat> shang, yes.. it's not debug info.. it's a tmux session, where windows/shells will pop up that 'replace' a hook execution; instead, the activity done interactively by the user is the hook, and when you're ready for the hook to be done, you exit the popped-up window
 * hazmat works on fixing principia wordpress
<fwereade> hazmat: would you let me know when you're done? I had this theory that was *probably* the issue, but haven't managed to actually fix it, and it would be nice to see the successful diff
<fwereade> (and I wasn't going to go and confidently pronounce how to fix the problem until I'd actually, y'know, done so)
<hazmat> fwereade, normally it's just .. adding a call to open-port anywhere in the formula, the wordpress in principia (or whatever it's called) is derived from juju/examples.. but diverged prior to the bashification.. in this case i'm effectively doing a hand merge of the bash script
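The fix really is that small; since hooks are just executables, a start hook that advertises port 80 (sketched in python here rather than the charm's actual bash) is essentially:

    #!/usr/bin/env python
    import subprocess

    # open-port records that this unit wants a port reachable; the
    # firewall only changes once the service is also exposed.
    subprocess.check_call(["open-port", "80"])
    subprocess.check_call(["service", "apache2", "start"])  # illustrative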
<hazmat> 100% of all open-port usage is non dynamic
<hazmat> zero calls to close port
 * hazmat takes a deep breath and moves on
<fwereade> hazmat: ah, I wondered about that
<fwereade> cheers
<shang> hazmat: do you still need the charm.log?
<hazmat> shang, no its cool, thanks though
<shang> hazmat: ok, thanks
<_mup_> juju/wordpress r50 committed by kapil.thangavelu@canonical.com
<_mup_> pull from juju trunk, bashify, and include open-port call
<hazmat> shang, if you update your branch/checkout of the formula it should work now
<hazmat> shang, you'll need to destroy the service and redeploy with the new formula..  i didn't include an upgrade script
<shang> hazmat: ok, let me try
 * hazmat should go back to sleep
<wrtp> hazmat: so, FindMachineSpec
<wrtp> hazmat: the problem with returning a list of possible specs is that it might be outrageously long
<wrtp> with all combinations of n parameters
<hazmat> wrtp, yeah.. if its encapsulating all permutations
<wrtp> yeah, so i think it might be better to expose some interface for finding possible values of each parameter
<hazmat> wrtp, so i was thinking something a bit more simple..
<hazmat> exactly
<wrtp> as in: possible RAM config, possible OS image, possible location, etc
<hazmat> driving a ui, for example, is a good scenario to keep in mind
<hazmat> ie. what would you want to see
<SpamapS> hm, this sounds a lot like facter
<wrtp> SpamapS: facter?
<SpamapS> facter shows you RAM, OS, CPU#, etc.
<hazmat> SpamapS, its more akin to dash or ec2 ui
<SpamapS> its used a lot in puppet
<hazmat> and chef
<SpamapS> I missed the context tho
<wrtp> SpamapS: i was trying to come up with a nice way of specifying a machine to start
<hazmat> SpamapS, just discussing environment interfaces in go
<SpamapS> oh like, you want to choose the machine type for the user?
<hazmat> wrtp, so i wouldn't worry about the enumeration stuff for now, we can add that latter
<wrtp> hazmat: yeah
<hazmat> SpamapS, no.. more like we want to give the user the option, and be able to validate it
<hazmat> or present a ui with options
<wrtp> SpamapS: for reference, here's my first stab at a spec/doc for a Go interface to juju:  http://paste.ubuntu.com/707950/
<wrtp> hazmat: yeah, i'll keep it ultra simple for now, with the expectation of fleshing it out later
<hazmat> SpamapS, but carry the user selection down to the provider
<SpamapS> Wow I'm quite confused
<SpamapS> at what point do you get to ask users questions?
<hazmat> SpamapS, right now it's kinda broken -- we only grab it from the config file; the idea is that a user specification gets passed down to the provider
<SpamapS> (at what point would we ever WANT users to be bothered with questions?)
<hazmat> SpamapS, at deploy time
<hazmat> SpamapS, optionally we fall back to env defaults
<hazmat> SpamapS, i want to deploy cassandra on a HUGE machine ;-)
<wrtp> i'm more imagining an intelligent choice based on a previously specified user constraint
<wrtp> rather than user interaction per se
<hazmat> SpamapS, but haproxy on  a tiny machine
<hazmat> the size of cassandra is based somewhat on usage
<SpamapS> Hrm, I doubt users want to be stopped and asked about this
<hazmat> interesting
<SpamapS> if they don't do --ram BIG or --machine-type m1.large  ... env default seems appropriate
<hazmat> and that's what it will continue to be
<wrtp> i'm thinking that the user probably wants to be able to verify that they won't be paying more than a certain amount of money
<SpamapS> it would DEFINITELY be cool to map abstract arguments to provider machine types
<wrtp> SpamapS: that's the idea
<SpamapS> --budget-per-hour 0.50
<wrtp> yup
<SpamapS> But to ask the user.. fail
<wrtp> i think that's pretty crucial actually
<SpamapS> just say "Cannot determine best machine type, options { x, y , z }" and exit(-1)
<wrtp> SpamapS: yeah. although the user should be able to iterate without spending, i think. explore the possibilities.
<SpamapS> Still
<SpamapS> this sounds like we're getting way ahead of being awesome at what we currently do. ;)
<wrtp> SpamapS: more like: "no machine type available that meets your budget constraints" perhaps
<wrtp> SpamapS: this was all spawned from discussion about the FindImage method in the go docs i posted above
<SpamapS> as a first iteration, just adding --machine-type XXXXXX would be a quantum leap
<wrtp> SpamapS: yeah, we're way ahead of ourselves, but it probably helps to have an idea of where we might go
<SpamapS> and also being able to change the machine type for a running service would be good
<wrtp> and i do think being able to plan your budget (and explore different budgets across different providers) is going to be important in the long run
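A toy version of that budget mapping — the prices and type names below are invented, not live provider data — just filters a price table and keeps the largest type that fits:

    # Hypothetical $/hour table; a real implementation would fetch
    # current provider pricing instead of hard-coding it.
    PRICES = {"m1.small": 0.085, "m1.large": 0.34, "x1.huge": 1.80}

    def best_type_for_budget(budget_per_hour):
        affordable = dict((t, p) for t, p in PRICES.items()
                          if p <= budget_per_hour)
        if not affordable:
            raise ValueError("no machine type available within budget")
        # "Best" is taken to be the priciest affordable type.
        return max(affordable, key=affordable.get)

    assert best_type_for_budget(0.50) == "m1.large"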
<wrtp> SpamapS: is that actually possible?
<SpamapS> wrtp: sure, change it, then issue a "replace units" command that would slowly remove the old type and add the new type
<SpamapS> assuming its a service that has the ability to do that.. :)
<SpamapS> if not.. well then.. don't do that!
<wrtp> :-)
<SpamapS> point being, add-unit just uses whatever was the env default at the time of deploy..
<wrtp> SpamapS: so {add-unit; remove-unit}should do the job?
<wrtp> where remove-unit is removing an old unit
<wrtp> not the one just added
<SpamapS> sorry to derail, what I am trying to convey is that you're enhancing something that has a weak foundation .. might be good to get a structure in place .. I like the end goal a lot tho
 * SpamapS is just grasping at anything that will distract him from trying to figure out how to elastically grow a ceph cluster
<wrtp> SpamapS: hmm, just wondering about "whatever was the env default at the time of deploy..". does that mean we'd have to push the current environment to zk on each state change?
<wrtp> lol
<wrtp> is there a list anywhere of "current juju shortcomings that we'd like to address"?
<SpamapS> wrtp: the problem is that the provisioner that actually starts machines and assigns them to units has only one clue about the machine type, and thats the one in the service state in ZK
<SpamapS> wrtp: the unit should be able to provide an overriding clue
<SpamapS> wrtp: https://bugs.launchpad.net/juju
<wrtp> of course
<SpamapS> only 160 ;)
<SpamapS> https://bugs.launchpad.net/juju/+bugs?field.tag=production
<SpamapS> those are the ones we (or maybe just I) think are needed to be fixed for production usage of juju.
<SpamapS> wrtp: bug 829397 is the one I'm basically describing
<_mup_> Bug #829397: Link a service to a type of hardware and/or specific machine <production> <juju:Confirmed> < https://launchpad.net/bugs/829397 >
<wrtp> SpamapS: that sounds like it's not something coming from the current environment, but from some specification of the service itself.
<SpamapS> wrtp: when deploy happens, the current environment default is copied into the service def
<hazmat> ?
<wrtp> SpamapS: hmm. i think i prefer the idea of specifying an environment for a service rather than changing environments.yaml every time. but perhaps that's what would happen.
<hazmat> wrtp, yeah.. that's a major failing re copying the env per deploy to capture changes
<hazmat> its easy to fix that
<hazmat> and we need to do it for multi-user usage
<SpamapS> all things in environments.yaml that are not global should be runtime overrideable and changeable via some kind of command
<SpamapS> ami, machine type, etc. etc.
<wrtp> definitely.
<hazmat> we'll bootstrap with the environments yaml and thereafter use some variation of juju set to set env values
<wrtp> for things like default machine type etc, i'd imagine they would be (should be?) copied into the cloud only once, at bootstrap time
<wrtp> exactly
<SpamapS> hazmat: but what about that one time where I want to add-unit x1.large .. just to handle today's ridiculous load.. then back to m1.small's
<hazmat> SpamapS, i'd like to go down the road of deploy cli parameters
<wrtp> SpamapS: maybe the add-unit command should allow specification of things like machine type
<wrtp> hazmat: yeah
<SpamapS> let me override it at deploy time yes, but also let me override even that.
<wrtp> add-unit > deploy > bootstrap
<hazmat> SpamapS, ? huh
<hazmat> yeah
<SpamapS> hazmat: if I deploy with one type, I may want to change that later
<hazmat> SpamapS, sounds good, we just run the risk of turning into knife if we expose all cli options
<hazmat> but they're always optional
<wrtp> hazmat: knife?
<SpamapS> so add-unit needs an override at the cli level, and I also need to be able to 'juju set service-name machine-type=foo' to change it permanently
<SpamapS> hazmat: be religious about always having a sane default and you won't become knife. :)
<wrtp> SpamapS: and probably juju set-default machine-type=foo
<hazmat> wrtp, its the chef cli tool for management
<SpamapS> juju deploy foo should always give you a workable foo
<wrtp> hazmat: ah thanks
<SpamapS> anyway, I'm highly impressionable, and this recent "Amazon gets platforms" rant from G+ has me thinking about how juju sits as a platform
<hazmat> SpamapS, do tell
<SpamapS> I'd say its better than some, but has a long way to go to be accessible
<hazmat> SpamapS, and setting that on a service would do what to existing units?
<SpamapS> hazmat: leave 'em alone
<hazmat> SpamapS, we're working on it re accessible ;-)
<SpamapS> right, I know a REST interface is in the works, +10 on that
<hazmat> i'm going to explore some api work today
<hazmat> yup
<wrtp> currently we talk directly to the zk instance in the cloud, right?
<SpamapS> yes
<hazmat> wrtp, via ssh tunnel from the cli
<SpamapS> which should definitely go away
<hazmat> wrtp, its painfully slow for some ops
<hazmat> on large installs, many roundtrips
<wrtp> yeah. a higher level interface would be better.
 * SpamapS curses ceph's incessant "then find some way to copy this little dir to all other nodes" documentation
<hazmat> SpamapS, if only you had a distributed file system for that ;-)
<SpamapS> so ironic
<wrtp> some kind of json rpc thing might not be too horrible
<SpamapS> BSON ftw.. ;)
<hazmat> wrtp, yeah.. that's basically where i'm going .. effectively json rpc to expose the current cli as rest, and then some REST exposure of resources
<hazmat> although the latter is probably superfluous for the first cut
<wrtp> sounds plausible
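Very roughly, "json rpc to expose the current cli" could start as small as the sketch below; the method whitelist and shelling out to the cli are assumptions for illustration, not a design:

    import json
    import subprocess
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

    ALLOWED = ("status", "expose", "unexpose")  # whitelisted subcommands

    class JujuRPC(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            req = json.loads(body)
            if req.get("method") not in ALLOWED:
                self.send_error(400, "unknown method")
                return
            # Shell out to the existing cli; a real api would invoke
            # the underlying python entry points directly.
            out = subprocess.check_output(
                ["juju", req["method"]] + list(req.get("params", [])))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(json.dumps({"result": out}))

    # HTTPServer(("localhost", 8080), JujuRPC).serve_forever()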
<wrtp> niemeyer: hi!
<niemeyer> wrtp: Yo
<hazmat> niemeyer, greetings
<fwereade> morning niemeyer
<niemeyer> Hey folks
<hazmat> fwereade, the juju get stuff changed a little since the review to incorporate feedback, not sure if you wanted to have a second look before its merged
<hazmat> basically the separate option for schema went away, it just merges the schema with the current values for display now
<fwereade> hazmat, sounds like a nice idea actually
<fwereade> hazmat: I'll take a quick look
<hazmat> fwereade, thanks
<hazmat> fwereade, bcsaller, jimbaker also there's a trivial but critical fix resolved branch in review.. its only like 10 lines.. https://code.launchpad.net/~hazmat/juju/retry-sans-hook/+merge/79358
<hazmat> SpamapS, are we tagging SRU bugs separately, or you planning on just grabbing trunk?
<jimbaker>  hazmat, taking a look
<hazmat> er.. fix for resolved
<jimbaker> hazmat, +1, lgtm
<fwereade> hazmat, the docstring on command() needs updating, otherwise +1
<fwereade> hazmat, for the other one, I don't follow the connection between the code change (which looks good) and the test change
<_mup_> juju/expose-retry r408 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<hazmat> fwereade, the test was making the bad hook ... into a good one, and then calling resolved, by leaving it bad we verify the end state was reached without hook execution
<fwereade> hazmat: ok, I see now
<fwereade> hazmat: bit slow today :/
<fwereade> hazmat: +1
<hazmat> jimbaker, fwereade thanks
<hazmat> fwereade, its subtle one
<_mup_> juju/expose-retry r409 committed by jim.baker@canonical.com
<_mup_> Addressed review points
<_mup_> juju/trunk r405 committed by jim.baker@canonical.com
<_mup_> merge expose-retry [r=hazmat,fwereade][f=824279]
<_mup_> Ensure that port actions related to expose are retried, even when
<_mup_> unexpected exceptions are raised.
<jimbaker> ok, that bug fix is merged in. i'm taking today off (my kids are out of school today instead of monday for some reason). yet another beautiful day here in colorado :)
<_mup_> juju/config-get r397 committed by kapil.thangavelu@canonical.com
<_mup_> cleanup docs
<hazmat> jimbaker, cheers, have a good one
<jimbaker> i should also mention: when i walked my puppy this morning, i struck up a conversation about ubuntu with a new neighbor. i had my ubuntu pullover on for the chill morning. yet another big fan of ubuntu, good to see!
<fwereade> jimbaker, pleasing :)
<_mup_> juju/trunk r406 committed by kapil.thangavelu@canonical.com
<_mup_> merge config-get [r=fwereade,bcsaller][f=828326]
<_mup_> New juju subcommand for inspecting a service's current configuration and schema.
<_mup_> juju/trunk r407 committed by kapil.thangavelu@canonical.com
<_mup_> merge retry-sans-hook [r=fwereade,jimbaker][f=814987]
<_mup_> Fixes a bug with unit agent usage of resolved flags that caused
<_mup_> resolved to always execute hooks, instead of when hook retry was
<_mup_> explicitly specified.
<fwereade> happy weekends all, I'll probably drop in a bit later but I think I'm done for now
<hazmat> fwereade, have a good one
<_mup_> Bug #874423 was filed: rest/jsonrpc api  <juju:New> < https://launchpad.net/bugs/874423 >
<_mup_> juju/rest-agent-api r402 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<wrtp> fwereade: enjoy
<SpamapS> hazmat: I am going to grab trunk, but reconcile each bug in the changelog
<hazmat> SpamapS, ic, just wondering if we should be putting in test cases for each bug that's not a feature
<hazmat> or tagging them in some way
<SpamapS> hazmat: that would be helpful
<SpamapS> hazmat: lets not get too procedure bound though.. since in this instance, we are "the community" .. we can more or less demand an SRU as long as we aren't flagrantly dropping refactors
<SpamapS> hazmat: but it will go a long way to user trust that we don't break stuff in an update
<_mup_> Bug #874456 was filed: Initial Go juju commit. <juju:In Progress by rogpeppe> < https://launchpad.net/bugs/874456 >
<wrtp> hazmat: ^^
<wrtp> is there any way to get the bzr diff web page to show the diffs between two revisions?
<wrtp> e.g. here http://bazaar.launchpad.net/~rogpeppe/juju/juju-implementation/changes
<wrtp> i'd like to see diffs between rev 11 and rev 15
<wrtp> or do i have use a local tool for that?
<wrtp> hmm, i just managed to do it, but i'm not quite sure how!
<hazmat> wrtp, there's a couple of cli tools for this
<hazmat> wrtp, i highly recommend the qbzr plugin
<wrtp> plugin for what?
<hazmat> for bzr
<wrtp> ah
<hazmat> wrtp, its a qt ui on top of bzr, adds a bunch of 'q' prefixed commands
<wrtp> can i apt-get it?
<hazmat> wrtp, yes
<wrtp> i've been doing bzr diff --old x --new y --using diffuse
<wrtp> but it's not very satisfactory
<hazmat> wrtp bzr qbzr -r old..new
<wrtp> i just realised that i forgot to make the changes to my source that we discussed earlier
<hazmat> wrtp,  when reviewing a branch i typically do  from within the branch bzr qbzr -r ancestor:path_to_trunk
<hazmat> which shows you a diff of the branch changes against trunk
<wrtp> cool
<hazmat> er.. that should be qdiff not qbzr
 * hazmat needs a nap
<wrtp> hazmat: when you say "path_to_trunk" do you mean the last trunk revision number?
<hazmat> wrtp, no i mean the physical path to trunk
<wrtp> ok. BTW what happens if there's a colon in the path name?
<hazmat> wrtp, thats for a review of  a branch that's getting merged to trunk
<hazmat> wrtp, i dunno, never came up for me
<hazmat> wrtp, shouldn't be an issue
<wrtp> fairy nuff
<hazmat> wrtp, a double colon introduces some lookup behavior for a branch naming service  used by some of the more interesting plugins, like pipeline
<hazmat> which automates a stack of changes
<hazmat> too much information probably
<wrtp> interesting
<wrtp> i'm slooowly getting there with the bzr stuff
<hazmat> wrtp, no worries
<hazmat> wrtp, we should probably have a new developer doc describing the particulars of best-practice bzr layouts for dev
<wrtp> that would be good
<wrtp> i've barely used revision control systems in their full modern glory
<wrtp> hazmat: qdiff FTW!
<wrtp> marvellous
<wrtp> i wish someone had told me that when i asked in the main canonical IRC channel...
<hazmat> wrtp, #bzr on freenode is a pretty good stop for bzr questions
<hazmat> that main channel is just a presence thing, not a question place
<wrtp> right
<wrtp> i have to do another qdiff if i want to change the view to diff against another revision, right?
<wrtp> hazmat: BTW i got an email about a monthly report. where is that?
<hazmat> wrtp, priv msg
<SpamapS> hazmat: ah, so r406 is definitely a new feature
<hazmat> SpamapS, backwards compatible but yes..
<hazmat> SpamapS, is that an issue?
<SpamapS> yes and no
<_mup_> Bug #874486 was filed: status should show all relations for multiply-related services <juju:New> < https://launchpad.net/bugs/874486 >
<SpamapS> hazmat: normally we'd have "patch only" releases waved through the SRU process
<SpamapS> hazmat: but since juju has no release process.. it will raise the eyebrows of the SRU team
<hazmat> SpamapS, i can yank, but we need a date for features to go in
<SpamapS> no don't yank!
<hazmat> we're not purely in bug fix mode atm, we have open dev/refactor items up for this milestone
<SpamapS> :)
<hazmat> SpamapS, it's easy enough to do.. i just need to be clearer on what the sru scheduling is
<SpamapS> Just means we have to choose between fighting for the SRU despite the features and cherry picking only the bug fixes
<SpamapS> there is no schedule
<hazmat> SpamapS, my last understanding was that we were going to be going through to 12.04 with srus
<SpamapS> TECHNICALLY, SRU's are only for serious issues
<SpamapS> but in universe, what is serious and what is not is entirely up to the community around the package...
<hazmat> SpamapS, so do we really need to SRU everything?
<SpamapS> since that is .. us... ;)
<hazmat> SpamapS, seems like we should just point folks to a stable ppa?
<SpamapS> as I've said, I think if we have some automated tests that verify we didn't break anything (integration tests, not unit tests) then it should be fine.
<hazmat> and we can put in a week or two hiatus on feature merges, yank config-get, do the sru, open up trunk to features, and put out a stable ppa for folks who want the latest and greatest
<SpamapS> ugh my SSH is bursty
<SpamapS> get off facetime you durn ipad users here in the starbucks!
<hazmat> SpamapS,  that's what this is for http://wtf.labix.org/
<hazmat> SpamapS, it runs unit tests and a functional test
<SpamapS> hazmat: that does not count, because those tests are in tree and may be changed to fit the release
<SpamapS> The tests that work now, must work, unchanged, for every SRU we do
<SpamapS> IMO a stable PPA would also need this level of care
<hazmat> SpamapS, true for the unit tests, but the functional tests haven't been touched, and don't they live in tree... jimbaker ?
<hazmat> er.. i don't think
<SpamapS> They live in their own tree, but are under your control, and will likely be updated as juju is changed in backward incompatible ways.
<hazmat> SpamapS, they'll likely fail first, and we'll see that.. but okay, from a skeptical pov i can see why that's an issue
<SpamapS> Also, they don't really exercise things enough to make me comfortable. I'd like to have all charms deploy, relate to at least one thing, exposed, configs changed, and then destroyed..
<hazmat> SpamapS, okay... well you have some other ftests right?
<SpamapS> I think I can automate that
<hazmat> i need to go back read jamespage's charm tester stuff
<hazmat> but first a nap b4 the sprint
<SpamapS> I think jp's thing is an even higher order
<SpamapS> hazaway: sleeeeeep ;)
<zodiak> hey guys and gals, jst installed ubuntu 11.10, nice to see juju (aka ensemble ;) in there, I wanted to use my local machine, am I correct in thinking I have to setup basic lxc by itself first before trying to use charms ?
<bcsaller> zodiak: It looks like the docs for the local provider are not on the main url yet, but you can look at them here http://bit.ly/nsjdWu
<bcsaller> zodiak: the local provider will tell you if its missing packages when you try to bootstrap it
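A rough sketch of a local-provider stanza for ~/.juju/environments.yaml, using only keys that come up in this discussion (type, data-dir; juju-origin comes up later); the values are placeholders:

    # ~/.juju/environments.yaml -- local-provider sketch, values are placeholders
    environments:
      sample:
        type: local
        data-dir: /home/zodiak/local-juju   # bootstrap creates charms/, files/, state/ and units/ here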
<SpamapS> can we get a bump on that RT ticket for the docs?
<SpamapS> this will get worse as 11.10 users start looking into juju
<bcsaller> SpamapS: there is also a branch with a troubleshooting script that collects info to help debug local provider issues and that includes an expansion of those docs as well
<hazmat> https://portal.admin.canonical.com/48456
<hazmat> SpamapS, ^ it was 7 earlier today i thought
<hazmat> SpamapS, just sent in additional comment noting the urgency
<hazmat> now its 10 in the queue
<SpamapS> we're dealing with the release turmoil
<SpamapS> but I wonder if its getting bumped because nobody knows its part of the release
<hazmat> i added an additional note to that effect, and that we're fielding support requests because of the mismatch of the old docs to pkg in oneiric
<evandev> Is the UI available for Hadoop after Juju is bootstrapped and the environment is setup? (i.e. master with a slave running)
<SpamapS> evandev: which "UI" would you be referring to?
<SpamapS> evandev: and also, which charm? the one at lp:charm/oneiric/hadoop-master ?
<evandev> well im using a local charm from http://github.com/charms/hadoop-master & hadoop-slave
<evandev> and by UI I mean GUI ports 50030 / 50070 / 50075
<evandev> Through those charms tho hadoop is not even getting installed so I guess thats why im running into an error
<SpamapS> evandev: I believe the charms do open those ports
<SpamapS> evandev: are you seeing an error in the install step?
<evandev> Hadoop is not being installed on the hadoop-master/0
<SpamapS> evandev: did you look at /var/lib/juju/units/haddop-master-0/charm.log ?
<SpamapS> evandev: did you look at /var/lib/juju/units/hadoop-master-0/charm.log ?
<SpamapS> i kan spel
<SpamapS> evandev: also you probably need to 'juju expose hadoop-master' or the firewall will block access
<evandev> That was it.
<evandev> Thanks SpamapS very much
<SpamapS> evandev: no problem. Note that there are a bunch more charms at https://code.launchpad.net/charm
<SpamapS> evandev: the github page is just an experiment by m_3 ;)
<SpamapS> m_3: ^^
<evandev> Yea I want to modify a charm to install the CDH3 distribution from cloudera
<evandev> But thank you again. I really appreciate it
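A sketch of the flow just described; the relation endpoints are assumed, and the repository path depends on where the charms were checked out:

    juju deploy --repository=. local:hadoop-master
    juju deploy --repository=. local:hadoop-slave
    juju add-relation hadoop-master hadoop-slave
    juju expose hadoop-master   # without this the firewall blocks ports 50030/50070/50075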
<zodiak> bcsaller, hrm, so, whenever I do a juju deploy --repository=juju-charms local:memcached using lxc, the state never changes from null and I don't get any ip
<bcsaller> zodiak: maybe you could download and run this script http://bazaar.launchpad.net/~bcsaller/juju/local-troubleshooting/view/head:/misc/devel-tools/juju-inspect-local-provider
<zodiak> downloading (and thanks)
<zodiak> pastie up the output I take it ? :)
<bcsaller> yeah, that would be great
<zodiak> http://pastie.org/2697581
<zodiak> it's probably something 'duh' .. and if it is, that's awesome. I don't mind being an idiot :D
<m_3> SpamapS: dang, those are still up from the openstack demo... different charms for natty -vs- oneiric... I'll fix it
<m_3> the real fix is to get the oneiric hadoop packages into the partner ppa
<bcsaller> zodiak: other than seeing a typo in that script I don't see the issue yet, still looking though
<zodiak> bcsaller, thanks.. sorry about all this :)
<bcsaller> zodiak: no problem at all
<bcsaller> zodiak: it looks like its building out the initial image cache in /var/cache/lxc as expected, but we'd then expect an image to later appear when lxc-ls is run
<bcsaller> zodiak: is this still running or have you stopped it with a destroy environment?
<zodiak> I have done bootstrap and destroy a number of times
<zodiak> how long should I wait for the state to change ? I waited like 45 minutes earlier
<zodiak> and I am on FiOS so.. ;)
<bcsaller> zodiak: ... it shouldn't take that long at all. So this script is new, this is helpful in terms of me finding out what info to gather. I'm going to include some changes and maybe we can try running it again :)
<zodiak> surely
<zodiak> let me try and rm -rf the cache/lxc and bootstrap again
<zodiak> hrm
<zodiak> I think it's something permissions related
<zodiak> I am doing juju bootstrap as my user, not a problem. do a deploy and /var/cache/juju becomes owned by root
<bcsaller> zodiak: yes, local provider will ask for permissions via sudo when it needs them
<bcsaller> lxc interactions happen as root
<zodiak> ah
<zodiak> then shut my mouth :D
<bcsaller> zodiak: in your ~/.juju/environments.yaml you defined a data-dir for the local provider, is the bootstrap creating a directory as expected in that data-dir?
<zodiak> let me double check
<zodiak> it is indeed
<zodiak> charms, files, state and units (and a nice log ;)
<bcsaller> indeed
<bcsaller> the machine-agent.log, could you paste the last 20 lines or so
<zodiak> surely.. want me to wipe that and lxc cache clean first ?
<bcsaller> zodiak: can't hurt :) will take a little longer
<zodiak> eh.. I am at work.. not a problem ;)
<hazmat> zodiak, it would be good to pastebin the master-customize.log it sounds like
<hazmat> any problem creating units
<hazmat> is typically related
<bcsaller> hazmat: I was making sure it was getting that far first
<bcsaller> lxc-ls returned nothing, implying no template
<hazmat> bcsaller, oh.. yeah.. machine-agent.log indeed
 * hazmat goes back to mediawiki hacking and lurking
<bcsaller> hazmat: :)
<bcsaller> hazmat: you're back home after this weekend, right?
<hazmat> bcsaller, monday late night
<bcsaller> I'd like to schedule some time, maybe Tues to talk about the Co-lo stuff a little, I've been working through the old comments and some newer thinking and would like to go over it with you
<zodiak> bcsaller, the last line of the machine-agent.log is looking hopeful ..
<zodiak> 2011-10-14 16:05:32,853: unit.deploy@DEBUG: Creating master container...
<zodiak> will let you know how it goes ;)
<bcsaller> that is a good sign
<hazmat> bcsaller, sounds good
<bcsaller> zodiak: soon lxc-ls should show that it has a master template
<bcsaller> once you have this working subsequent deployments are much faster
<zodiak> yup. it does indeed.
<zodiak> lxc-ls does show 'stef-sample_local-0-template'
<zodiak> but juju status still says state:null and no ip address
<bcsaller> thats expected at this point
<zodiak> awesome :)
<bcsaller> it works by creating a template
<bcsaller> and then it clones that for unit deployments
<bcsaller> but creating the master the first time can take a little while
<zodiak> ah. so even though lxc-ls .. gotcha
<zodiak> danke
<bcsaller> now, kapil's suggestion will become important though: in data-dir/units there is a master-customize.log
<bcsaller> that will provide insight into the creation of the master container
<hazmat> a good sanity check if its done .. is if ps aux | grep lxc
<hazmat> has any output
<hazmat> if not...  lxc-ls should show stuff
<bcsaller> hazmat: in the troubleshooting section of the doc I recommend: pgrep lxc| head -1| xargs watch pstree -alU
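The sanity checks hazmat and bcsaller suggest, gathered in one place:

    ps aux | grep [l]xc                             # lxc processes still running => template build in progress
    sudo lxc-ls                                     # should list the master template once the build finishes
    pgrep lxc | head -1 | xargs watch pstree -alU   # watch the build live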
<zodiak> ps auxfw to the rescue.. still chunking away :)
<bcsaller> zodiak: yeah, the output of the master-customize.log is only written to the log file at the end of the run, but it will help us if this doesn't work
<zodiak> bcsaller, okay, so, machine-agent.log says 'juju.agents.machine@INFO: Started service unit memcached/0'
<bcsaller> zodiak: thats a good sign
<zodiak> which is nice.. looking good there..
<zodiak> but nothing in the units/ folder
<bcsaller> and juju status hasn't changed?
<zodiak> and state is still null :\
<bcsaller> does "juju ssh memcached/0" work?
<zodiak> nope. sticks on 'waiting for unit to come up'
<hazmat> bcsaller, unit agent grep, and manual chroot and upstart perhaps
<zodiak> I didn't do any AWS on this machine before, I don't need to setup them if I am using lxc correct ?
<hazmat> zodiak, correct
<zodiak> hazmat, danke.
<zodiak> sorry about all this guys+gals :(
<bcsaller> hazmat: yeah, I think we should be able to ssh as ubuntu into the box
<bcsaller> zodiak: happy to help get this working
<bcsaller> zodiak: we want to get the ip address of the unit so we can ssh in
<bcsaller> zodiak: did status have it yet or no?
<zodiak> nope.. still no status :(
<bcsaller> there are other ways
<bcsaller> I usually use nmap -sp 192.168.122.0/24 but you might not have that installed
<bcsaller> looks like this should work for you though
<zodiak> yup.. I have nmap
<bcsaller> host stef-sample_local-memcached-0 192.168.122.1
<bcsaller> oh, with nmap it was -sP (uppercase) anyway
<zodiak> got it (the nmap I mean)
<zodiak> and yes .. .1 is up
<bcsaller> 1 is the host machine, if there isn't another ip on that bridge then it didn't bring up another container yet
<bcsaller> what does lxc-ls show now?
<zodiak> ah ... 228 in that case
<hazmat> .1 is the bridge address, also has dnsmasq..
<bcsaller> ahh, ok
<hazmat> zodiak can you login into that .228 address (ubuntu@)
<bcsaller> ssh ubuntu@192.168.122.228 should get you into that container
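The lookup-then-login sequence from the lines above, for the libvirt default bridge:

    nmap -sP 192.168.122.0/24                          # ping-scan; .1 is the bridge/dnsmasq host itself
    host stef-sample_local-memcached-0 192.168.122.1   # or ask the bridge's dnsmasq directly
    ssh ubuntu@192.168.122.228                         # the ubuntu account has full sudo inside the container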
<zodiak> it does indeed :D
<zodiak> status still reports it as null state and no ip (fyi)
<bcsaller> the ubuntu account will have full sudo access and can look at the logs in /var/log/juju which should help us understand
<zodiak> gotcha. let me take a look see
<zodiak> huh
<zodiak> that log directory is empty
<bcsaller> in another shell can you run the script I gave you before as root with stef-sample_local-memcached-0 as the cmdline argument
<bcsaller> it should pull /etc/juju/juju.conf and so on to help us review that
<bcsaller> in the container we can also check /etc/init for the upstart job for the charm
<zodiak> http://pastie.org/2697796
<bcsaller> I suspect that the charm itself is failing
<zodiak> oh.
<hazmat> ls: cannot access /usr/lib/juju/juju: No such file or directory
<zodiak> bad luck on my part choosing that charm ?
<hazmat> bcsaller, ^
<hazmat> JUJU_ORIGIN=distro
<bcsaller> zodiak: the examples that come with juju, mysql and wordpress work
<bcsaller> hazmat: I saw that, but the juju package doesn't appear to be installed
<hazmat> bcsaller, that's the root issue isn't it
<bcsaller> I think so, yes, wondering how that happened. We'd been using the PPA for most of the testing, but that should be working as well...
<hazmat> zodiak, can you try putting juju-origin: ppa in environments.yaml... destroy-environment && bootstrap
<hazmat> don't need to clear out the cache
<hazmat> the lxc cache that is
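The suggested fix as a sequence; juju-origin: ppa makes the containers install juju from the PPA rather than the apparently-broken distro path:

    # add to the environment's stanza in ~/.juju/environments.yaml:
    #   juju-origin: ppa
    juju destroy-environment
    juju bootstrap
    juju deploy --repository=juju-charms local:memcached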
<zodiak> hazmat, can do.. one sec
<bcsaller> zodiak: the master-customize.log was never written though, right?
<zodiak> if nothing else, I am getting used to juju :)
<bcsaller> in data-dir/units
<zodiak> bcsaller, correct
<bcsaller> that should have indicated if there was an issue installing the package... oh well. setting it to the PPA as suggested should work
<zodiak> where do I grab the charms from ?
<hazmat> zodiak, can you pastebin the entire master-customize.log
<zodiak> maybe it's something out of date there ?
<hazmat> this is before the charm is touched even, so thats not an issue atm
<hazmat> but the charms are in bzr on launchpad..
<zodiak> okay, sorry, where is the master-customize.log ?
<hazmat> a listing of them is at https://code.launchpad.net/charm
<bcsaller> in your data-dir/units directory
<bcsaller> but you said it wasn't there
<zodiak> it's there after doing the ppa and destroy/bootstrap
<hazmat> its definitely there if its on to creating units
<zodiak> let me pastie. one sec.
<bcsaller> zodiak: I also pushed a newer version of the script
<zodiak> http://pastie.org/2697831
<zodiak> bcsaller, okay... although take a look at the pastie.. it appears to be a bridging issue from the lxc .. at least, that's what it looks like :(
<zodiak> strange
<bcsaller> zodiak: did you install apt-cacher-ng and is it running?
<bcsaller> 192.168.122.1:3142 in a browser should tell you right away
<bcsaller> there is a 'stats' link on the page you should see if it's working, which shows you how the apt cache is being used
<bcsaller> zodiak: was that running or not? from the log it appears not to have been, which is odd because the local provider checks that its installed (but not running I guess)
<hazmat>  zodiak ps aux | grep apt-cacher .. shows something ?
<hazmat> in the host
#juju 2011-10-15
<bcsaller> hazmat: not responding to hails, send a rescue team
<bcsaller> hazmat: we should include a exit code output check on /etc/init.d/apt-cacher-ng status
<hazmat> service apt-cacher-ng status
<hazmat> more localization fun
<bcsaller> not with the exit code
<bcsaller> but about that, the virsh thing, its not for user interaction, we can force the locale when we exec it
<zodiak> sorry, I got sidetracked by work ;)
<zodiak> service apt-cacher-ng start fails to start
<bcsaller> zodiak: np
<bcsaller> zodiak: did you do it with sudo?
<zodiak> huh.. what the .. install: cannot change owner and permissions of `/var/run/apt-cacher-ng': No such file or directory
<zodiak> and even if I do sudo, it still says ; Problem creating log files. Check permissions of the log directory, /var/log/apt-cacher-ng
<bcsaller> sounds like the package didn't install properly but thats very odd
<zodiak> okay, so, time to bug the ubuntu peeps
<bcsaller> zodiak: you might try reinstalling the package and seeing if that leaves things in the same state or not
<zodiak> bcsaller, good call
<zodiak> okay, so, apt-cacher-ng installed fine the second time (not hugely reassuring)
<bcsaller> I'd destroy-environment and then bootstrap again
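What the reinstall-and-retry amounts to; the 3142 status-page check is from bcsaller's earlier suggestion:

    sudo apt-get install --reinstall apt-cacher-ng
    sudo service apt-cacher-ng status      # should now start cleanly
    # http://192.168.122.1:3142/ in a browser should show the cache's status page
    juju destroy-environment && juju bootstrap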
<zodiak> woohooo
<zodiak> bcsaller, I did indeed.. and.. public-address is now in the juju status :D :D
<zodiak> awesome!
<bcsaller> zodiak: very good
<zodiak> and I am going to kill whoever packaged up apt-cacher-ng
<zodiak> :D
<bcsaller> ha
<bcsaller> zodiak: failure is just another word for lesson :)
<zodiak> yes.. or murder :P
<bcsaller> fair enough
<bcsaller> that's firmly under the heading of bad juju though
<zodiak> although now I have a fun weekend ahead of me, trying to figure out how to create custom charms to help deploying our rails app
<zodiak> *GROAN*
<zodiak> you couldn't resist.. could you ? :)
<bcsaller> I'm weak
<bcsaller> what can I say
<zodiak> *chuckles* well, I think you have earned your bad puns for the day ;)
<zodiak> danke everyone, bcsaller, hazmat
<bcsaller> zodiak: try the examples mysql, wordpress if you want to see it working locally with something a little more addressable
<hazmat> zodiak, awesome.. go forth and multiply ;-)
<bcsaller> its nice to see that level of isolation running on a local box
<bcsaller> hazmat: that was an odd one, huh? :)
<bcsaller> failure in the apt-cacher-ng package
<bcsaller> heh
<hazmat> bcsaller, packages are a bunch of random scripts
<hazmat> ;-)
<bcsaller> I've heard that somewhere
<zodiak> hrm. is there an 'undeploy' for juju ?
<zodiak> unexpose but .. no undeploy ?
<m_3> zodiak: I have an old rails example here http://bazaar.launchpad.net/~mark-mims/+junk/ensemble-rails/files
<m_3> I'll be happy to help modernize it after the weekend (at a conference atm)
<zodiak> m_3 ooohhh.. awesome! thank you :)
<zodiak> surely
<m_3> it's for rails3, but there're references to "ensemble" instead of "juju"
<m_3> but the basic idea's still sound
<hazmat> zodiak, destroy-service is undeploy
<hazmat> kills the containers as well (minus the master used for creating new ones)
<zodiak> aaahhh
<m_3> hazmat: where would I look to find logging during "expose"?
<hazmat> m_3, provisioning agent
<hazmat> m_3, /var/log/juju/
<m_3> k thanks
<hazmat> m_3, it should come out of a debug-log as well
<yitno> hi all
<xxiao> is juju for public cloud or can it be used for private cloud?
<SpamapS> xxiao: it supports any EC2 provider, in theory
<SpamapS> xxiao: its been tested against AWS and OpenStack.
<SpamapS> xxiao: also a long time ago it was tested against eucalyptus 2.0
<_mup_> Bug #874801 was filed: Juju should have a capistrano renderer for status <juju:New> < https://launchpad.net/bugs/874801 >
<backburner> juju owns
<hazmat> m_3, its time to go to work ;-)
<eagles0513875> hi guys :)
<_mup_> Bug #875042 was filed: allow for import/export of a environment or subset <juju:New> < https://launchpad.net/bugs/875042 >
<_mup_> juju/unlocalize-network r408 committed by kapil.thangavelu@canonical.com
<_mup_> unlocalize the libvirt network integration
<hazmat> koolhead17, greetings... again ;_)
<koolhead17> hi
<hazmat> koolhead17, so you would have a db-relation-changed hook in moodle
<koolhead17> yes
<hazmat> that would check (using relation-get ) for the username/password/db name
<koolhead17> yes
<koolhead17> enmand_: welcome
<enmand_> Hi there
 * enmand_ waves
<koolhead17> enmand_: https://juju.ubuntu.com/docs/
<koolhead17> )
<koolhead17> hazmat: user=`relation-get user` password=`relation-get password` host=`relation-get host`
<koolhead17> am reffering to that drupal charm
<koolhead17> hazmat: everything lies in "db-relation-changed"
<hazmat> koolhead17, back.. so yeah that's the ticket
<hazmat> koolhead17, but you have to check if its set to a value, and if not exit the hook, and it will be called again when it is
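A minimal sketch of that guard, in the bash style of the drupal charm being referenced:

    #!/bin/sh
    # hooks/db-relation-changed
    user=`relation-get user`
    password=`relation-get password`
    host=`relation-get host`
    # relation settings arrive asynchronously; exit cleanly if they are not
    # set yet -- the hook fires again once the other side sets them
    [ -z "$user" ] && exit 0
    # ... configure the app with $user, $password and $host ...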
<hazmat> m_3, what's your twitter handle?
<koolhead17> hazmat: so how will mysql server know what all things it has to pass
<hazmat> koolhead17, its relation hook has a known set of things it needs to allocate and set for a new relation
<koolhead17> or how would i know in my formula from which charm i should get what
<hazmat> koolhead17, you know because your charm needs some service, and there's a service for that charm.. the 'interface' for the relation defines a communication protocol
<hazmat> but its loosely typed/spec atm
<hazmat> koolhead17, we have some documentation on the wiki regarding some of the more common interfaces and what they define for values
<koolhead17> hazmat: i have a situation where my settings.php file wants hostname for the running machine
<koolhead17> where and how will i define that
<koolhead17> App running machine
<hazmat> koolhead17, unit-get public-address
<hazmat> in a hook will get the public hostname for the unit
<koolhead17> hazmat: where will i define it
<hazmat> its available in any hook
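So, inside any hook of the app's charm, for example:

    hostname=`unit-get public-address`   # the unit's public hostname, usable in settings.php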
<koolhead17> ok
<Brimzen3> hallo
<Brimzen3> Where are you from
<koolhead17> Brimzen3: ?
<_mup_> juju/assume-local-ns-if-local-repo r408 committed by kapil.thangavelu@canonical.com
<_mup_> auto qualify local ns if local repo specified and no namespace
<_mup_> Bug #872164 was filed: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found <juju:In Progress by hazmat> < https://launchpad.net/bugs/872164 >
<m_3> hazmat: http://paste.ubuntu.com/708845/
<m_3> hazmat: http://paste.ubuntu.com/708846
<hazmat> m_3, thanks
<hazmat> m_3, which is which?
<hazmat> m_3, nm
<hazmat> m_3, so it looks like there's some sort of odd race in the db-changed hook
<hazmat> m_3, on wiki2 when its called the second time it just exits
<hazmat> no output
<hazmat> m_3, on wiki1 on the second call it does additional stuff
<hazmat> seems like a good time for debug-hooks
<m_3> yup
<hazmat> m_3, could you paste bin your creation script?
<m_3> sure
<m_3> hazmat: http://paste.ubuntu.com/708850/
<hazmat> m_3, thanks
 * hazmat hugs local provider 
<m_3> too big for local provider though
<m_3> http://paste.ubuntu.com/708852/ is a pared-down version
<hazmat> m_3, not my local provider ;-)
<hazmat> m_3, do you have local charm mods?
<SpamapS> m_3: how's the hackering going?
<m_3> SpamapS: great
<m_3> hitting some basic charm bugs
<m_3> relations stepping on each other a bit
<m_3> but we have access to their openstack labs env now
<m_3> everyone just wants to see gource though :)
<SpamapS> m_3: lol
<SpamapS> m_3: glad we could give them a shiny
<SpamapS> m_3: I've been kind of hoping that you guys will completely replace the mediawiki charm with one that kicks serious ass. ;)
<SpamapS> m_3: I suppose its time I made it actually show errors/successes properly. :)
<hazmat> SpamapS, so one problem making it work in labs
<hazmat> SpamapS, it doesn't look like we have the latest python-txaws in the repo for lucid
<hazmat> which is the host we're launching things from
<hazmat> actually the ppa for txaws looks a bit old
<hazmat> i guess i just source install FTW!
<hazmat> lame for them though
#juju 2011-10-16
<enmand_> Is there a default charm repository installed with Juju on Oneiric?
<enmand_> Or does it rely on store.juju.ubuntu.com? (Which doesn't seem to exists?)
<hazmat> enmand_, there are local repos
<hazmat> in oneiric ootb
<SpamapS> hazmat: backportpackage can fix that btw
<SpamapS> hazmat: I actually just finished backporting the oneiric txaws to ppa:clint-fewbar/fixes
<SpamapS> hazmat: the whole dh_python2 mess does make things hard to backport tho
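Roughly the invocation being described, using backportpackage from ubuntu-dev-tools (the exact flags here are an assumption):

    backportpackage -s oneiric -d lucid -u ppa:clint-fewbar/fixes txaws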
<eagles0513875> hey guys
<eagles0513875> where can i see what charms are available in the juju repo
 * eagles0513875 waves to fwereade in here :)
<enmand_> hazmat, where are the local repos in Oneiric? Are they in a package besides juju?
<m_3> hazmat: try logging out of the labs webpage and log back in
<m_3> Ryan_Lane: euca-run-instances -k wmflabs-mmm-20111016 -t c1.medium ami-00000004
<Ryan_Lane> hazmat: what error are you getting when trying to log into canonical-bridge?
<SpamapS> enmand_: there's a package in ppa:juju/pkgs called 'charm-tools' that will include a command, 'charm getall'
<SpamapS> enmand_: it uses bzr to checkout all of the charms from https://launchpad.net/charm
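The steps SpamapS describes; whether 'charm getall' takes a destination directory is an assumption:

    sudo add-apt-repository ppa:juju/pkgs
    sudo apt-get update && sudo apt-get install charm-tools
    charm getall ~/charms   # branches every charm listed at https://launchpad.net/charm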
<enmand_> Ah, OK
<enmand_> So, there is no default charm set for Oneiric?
<m_3> SpamapS: morning
<m_3> SpamapS: is it easy to enable lp review features for lp:charm?
<Ryan_Lane> heh. I figured it out
<Ryan_Lane> bad config
<Ryan_Lane> I'm working on fixing it
<Ryan_Lane> if puppet will ever actually run :(
<Ryan_Lane> m_3: it's working now
<SpamapS> m_3: enable them? err.. they're built in to it.
<SpamapS> m_3: what features are you looking for?
<m_3> SpamapS: just a +1 from anyone in charmers before promulgation
<m_3> I guess the lp review stuff comes automatically with merge proposals
<m_3> SpamapS: but we don't have such things for charms atm
<m_3> SpamapS: (features like submit for review, pending review state, pending review queue for charmers, etc)
<m_3> Ryan_Lane: testing now...
<SpamapS> m_3: well if we were more careful and didn't just push to lp:charm/foo then we could use reviews
<SpamapS> charm-tools still uses bound branches.. which it probably shouldn't.. :-P
<SpamapS> m_3: the review stuff is easily enforcable by policy
<Ryan_Lane> and I confirmed, if you log out and log in, it changes your access and secret key
<Ryan_Lane> working on fixing that now
<m_3> SpamapS: gotcha... can we do this alongside "charmstore" landing?
<m_3> Ryan_Lane: gotcha
<m_3> Ryan_Lane: s/euca-run-instances/euca-ran-instances/ !
<SpamapS> m_3: the charm store stuff I don't know about.. but if bzr branches are in use, the review stuff is built in
<Ryan_Lane> sweet
<m_3> SpamapS: cool
<m_3> total "Dude, where's my car?" moment :)
<SpamapS> m_3: I believe the charm store bits will always push to a personal branch... but I really don't know
<SpamapS> NO WHAT DOES IT *SAY*
<SpamapS> ;)
<hazmat> the use of bzr in the charm store will be transparent
<hazmat> Ryan_Lane, for some reason i get .. Permission denied (publickey).
<hazmat> on ssh attempts
<Ryan_Lane> hmm
<hazmat> Ryan_Lane, i had a look from m_3's login shell it all looks normal
<Ryan_Lane> lemme look at logs
<Ryan_Lane> Invalid user kapil from 12.70.135.2 ;)
<hazmat> RoAkSoAx, never mind
<hazmat> Ryan_Lane, yeah.. just saw that
<hazmat> doh
<hazmat> user fail ;-)
<Ryan_Lane> :D
<Ryan_Lane> your novarc is likely broken too
<Ryan_Lane> lemme fix the problem I'm having, and fix that for you
<hazmat> Ryan_Lane, oh? yeah.. i'm unable to connect to the swift storage it seems
<Ryan_Lane> swift storage?
<Ryan_Lane> we don't have swift storage
<Ryan_Lane> but your access and secret keys are bad
<hazmat> Ryan_Lane, oh.. is there a s3server (from nova) running?
<Ryan_Lane> ah
<Ryan_Lane> only if that's the object store
<Ryan_Lane> and that would be glance
<Ryan_Lane> err
<Ryan_Lane> wait. no
<Ryan_Lane> glance is just for service images
<hazmat> glance is the image store, it layers ontop of the object store
<Ryan_Lane> we have no object store
<Ryan_Lane> no volume support right now either
<Ryan_Lane> did you guys need volume support for this?
<hazmat> Ryan_Lane, that's problematic for juju, we use the objectstore to distribute charms to the instances.. nova includes a very simple s3server that just stores things in a directory
<hazmat> Ryan_Lane, we don't need the volume support
<hazmat> its nice to have, but not required, the object store is
<Ryan_Lane> they got rid of the objectstore in cactus, I believe
<Ryan_Lane> ah. wait
<Ryan_Lane> there's a nova-objectstore package still around
<Ryan_Lane> gimme a sec
<hazmat> Ryan_Lane, its in the source tree at nova/objectstore/s3server.py
<Ryan_Lane> yep
 * hazmat updates his branch
<Ryan_Lane> I didn't have the package installed
<Ryan_Lane> it's now on virt1
<hazmat> Ryan_Lane, cool.. currently the generated novarc are referencing a dead s3 server url .. do you know what the correct one is/will be?
<Ryan_Lane> it's correct now
<Ryan_Lane> I just brought the service up
<hazmat> Ryan_Lane, awesome thanks
<Ryan_Lane> yw
<Ryan_Lane> lemme know if it has any issues
<Ryan_Lane> I need to fix your credentials, likely
<hazmat> Ryan_Lane, yeah.. that seems to be the remaining issue
<SpamapS> hazmat: btw, I tried txaws against ceph's RADOS .. worked well except creating buckets.. but I think that may have been a lighttpd fail, not RADOS
<hazmat> SpamapS, nice
<hazmat> SpamapS, is there a charm for that?
<hazmat> ceph that is
<SpamapS> hazmat: yes, the ceph charm
<SpamapS> but its kind of.. in flux. ;)
 * hazmat checks charm world
<SpamapS> Should work for a single node, or 3 node cluster. The difficulty is elasticity.. ceph is just growing things to make that easy
<Ryan_Lane> hazmat: I need to fix the code that keeps changing your credentials, then I'll fix your credentials. heh
<hazmat> Ryan_Lane, sounds good
<hazmat> SpamapS, fair enough.. i'd prefer gluster for most distributed fs usages now.. i still think of ceph as more on the experimental side
<hazmat> ie. only widely deployed by its creating org
<SpamapS> hazmat: the CEPH guys would agree for the mounted FS case. But their objectstore is apparently already seeing extremely heavy use.
<hazmat> SpamapS, outside of dreamhost?
<SpamapS> hazmat: no
<SpamapS> :)
<hazmat> which has a team of 40 devs to support it ;-)
<SpamapS> er, I think its closer to 4
<hazmat> oh.. gustavo mentioned they were hiring like crazy for it
<SpamapS> hiring and having are two different things. :)
<SpamapS> They've only just recently started treating CEPH as more than an experiment
<SpamapS> Honestly, with storage, I'm not sure super automatic elasticity is all that awesome of an idea.
<SpamapS> if your CPU bound thing has a problem coming up or down, oh noes, it goes slower
<SpamapS> if your I/O bound thing loses data... you are screwed
<SpamapS> So I may just make the ceph charm deploy ceph and build the config file, but let admins do the work of adding/removing nodes
<hazmat> ah.. charm world doesn't pick it up because its not a trunk branch
<hazmat> we'd have to introduce an extra namespace layer to allow for deploying charm branches
<SpamapS> no don't do that ;)
<SpamapS> I forgot I haven't promulgated it yet
<SpamapS> Because its changing a lot
<m_3> SpamapS: do you have an environments.yaml entry from openstack?  i.e., wanna see the s3-uri for the nova objectstore
<SpamapS> s3-uri: http://x.x.x.x:3333
<hazmat> m_3, the one in my home dir should be fine
<m_3> SpamapS: does it require additional path like /services/Eucalyptus does?
<hazmat> on the gateway
<SpamapS> no path specified at all in mine
<m_3> gotcha... cool... just checking we're trying to call the right thing
<m_3> it
<hazmat> m_3, its a credential problem
<m_3> hazmat: right
<SpamapS> mo credentials, mo problems
<SpamapS> man, I need a giant RAM disk to do local deploys on
<m_3> SpamapS: dude, local deploys rock!
<SpamapS> Yeah, but now that I don't have to wait for amazon..
<m_3> but yeah, dualcore/8G on the laptop doesn't cut it
<SpamapS> I have to wait for my local disk
<m_3> I know
<m_3> it's always something
<SpamapS> If I had 8G I could make 4G for deploys :)
<m_3> wanna replace my cd-rom with SSD
<SpamapS> I've been shopping for SSD's for that very reason.
<m_3> there's a kit for that in the mbp
<SpamapS> Maybe I should enable write caching on my laptop disk
<SpamapS> that actually helps quite a bit. :)
<SpamapS> just have to remember to turn it off after deploy ;)
<m_3> http://virt1.wikimedia.org:3333/
<m_3> shows that we can create the bucket at least
<hazmat> SpamapS, it doesn't take that long for local deploys.. its mostly just the package install, ssd make it rock... i'm still waiting for a native 7mm ssd to come on the market
<SpamapS> hazmat: but then you're shortening your SSD's life
<hazmat> SpamapS, not concerned, i got it to use it ;-)
<m_3> http://pastebin.com/EUaafbPA
<SpamapS> Hrm.. I dunno. at $500 for a big one.. I want to use it for more than a year. :-P
<m_3> grabbing food for a sec
 * SpamapS is now wondering if his assumptions of endurance are false tho
<hazmat> SpamapS, the wear level is pretty good on the devices, and the sandforce controllers are pretty awesome about dedup
<SpamapS> ok, so most can sustain 20GB/day for 5 years
 * SpamapS is ordering now.. F'it
<hazmat> SpamapS, just make sure you're compatible size-wise with your laptop
<SpamapS> ta
<Ryan_Lane> hmm. the only thing that I see so far that'll deliver a 403 is if the object already exists
<SpamapS> hazmat: yeah I've been looking into it
<Ryan_Lane> yeah. it'll only give a 403 if the file or directory exists
<Ryan_Lane> did you guys do a test write into the file you need to create with juju?
<Ryan_Lane> err. object
<Ryan_Lane> odd. it's giving a 405...
<Ryan_Lane> I don't even see that in the code
<hazmat> Ryan_Lane, i'm still getting errors on auth.. 405 might be coming from pylons
<SpamapS> hazmat: btw, ceph is now the second time where I have 'relation-set' the base64 of a file on disk.. I wonder if we can't get a 'relation-file /etc/hosts' to make sharing files easier
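The pattern SpamapS is describing, sketched with a hypothetical setting name:

    # on the unit that has the file:
    relation-set hosts="$(base64 -w0 /etc/hosts)"
    # on the other end of the relation:
    relation-get hosts | base64 -d > /etc/hosts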
<Ryan_Lane> hazmat: errors on auth where?
<Ryan_Lane> to nova?
<hazmat> Ryan_Lane, to s3server
<Ryan_Lane> ah. right
<Ryan_Lane> yeah
 * hazmat checks if novarc has changed
<Ryan_Lane> oh. wait
<Ryan_Lane> let me fix your secret and access keys
 * m_3 looks for variation of s3cmd that'll work with nova object-store
<hazmat> Ryan_Lane, cool thanks
<SpamapS> m_3: it works fine but you have to skip verifying your settings and manually edit the config
<SpamapS> m_3: ~/.s3cfg .. its obvious where to change the hostnames
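The edit in question, assuming the stock .s3cfg key names and the objectstore address that appears later in this log:

    # in ~/.s3cfg (after s3cmd --configure, skipping the verify step)
    host_base = virt1.wikimedia.org:3333
    host_bucket = virt1.wikimedia.org:3333
    use_https = False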
<m_3> SpamapS: thanks
<SpamapS> m_3: note that txaws comes with some handy commands for using S3
<Ryan_Lane> hazmat: heh. seems your keys are fine
<Ryan_Lane> pylons giving back 405?
<hazmat> hmm
<Ryan_Lane> I dunno what that is
<hazmat> http://pastebin.com/8PrAvyCf
 * hazmat tries with s3cmd
<hazmat> m_3, its a hidden option only in the config file of s3cmd
<hazmat> Ryan_Lane, it seems to be fine manually... i'll dig into debugging it
<hazmat> oh
<hazmat> that's the problem
<Ryan_Lane> ?
<hazmat> me and m_3 are probably using the same bucket ;-)
<m_3> oh no way
<hazmat> hmm
<hazmat> nope that's not it
 * hazmat goes back to drawing board
<Ryan_Lane> it's possible this is missing function calls you need
<Ryan_Lane> this is the super simple, kind of shitty object store
<SpamapS> we're using that one in canonistack
<Ryan_Lane> it seems it doesn't even have authentication
<SpamapS> Its shittiness is worked around in txaws and juju quite a bit ;)
<Ryan_Lane> heh
<SpamapS> it should have auth
<SpamapS> its used for image uploads
<Ryan_Lane> glance is used for image uploads
<Ryan_Lane> and glance doesn't have auth either ;)
<SpamapS> Hmmmmm.. right.. I thought somebody told me that this was used to facilitate those image uploads
<Ryan_Lane> we are using cactus
<SpamapS> Oh
<SpamapS> snap
<SpamapS> and anything works?
<SpamapS> you know Diablo's out.. ;)
<Ryan_Lane> cactus is perfectly stable for us ;)
<SpamapS> We had to fix quite a few diablo bugs for juju to work
<Ryan_Lane> I wasn't comfortable upgrading a few days before the hackathon ;)
<SpamapS> Yeah
<SpamapS> its non-trivial
<SpamapS> Big component of the essex design summit was how to do upgrades
<m_3> vast understatement?
<Ryan_Lane> the objectstore hasn't changed at all I believe
<Ryan_Lane> it is deprecated
<hazmat> ah.. its cactus
<SpamapS> hazmat: I wonder if the nova bugs with groups were all introduced during diablo
<m_3> swift on an instance and we can just use that s3-uri?
<SpamapS> if they were in cactus too..  there will be problems. :p
<Ryan_Lane> bugs with groups?
<SpamapS> Yeah nova couldn't handle the group management that juju does for firewall management
<Ryan_Lane> ah
<Ryan_Lane> security groups you mean?
<SpamapS> instances would fail to start because of at least 1 bug
<SpamapS> yeah
<Ryan_Lane> ah
<Ryan_Lane> I haven't seen any security group issues yet
<Ryan_Lane> they may have been introduced in diablo
<hazmat> Ryan_Lane, juju creates a security group that allows intra-group traffic, it broke diablo for a while
<hazmat> Ryan_Lane, there are a couple of bugs fixed wrt to juju usage in the diablo release
<Ryan_Lane> ah. ok
<hazmat> i'm going to see how far i get with it
 * Ryan_Lane nods
<_mup_> Bug #875903 was filed: Zookeeper errors in local provider cause strange status view and possibly broken topology <juju:New> < https://launchpad.net/bugs/875903 >
<hazmat> so the 401 unauthorized is actually from trying to describe the security groups
<hazmat> SpamapS, re that bug, did you hibernate?
<hazmat> SpamapS, the session expiration is fatal atm, and a hibernate will trigger it, since there isn't any heartbeat and then the clock advances
<hazmat> status should be verifying against the agent presence nodes
<SpamapS> hazmat: no, just been fiddling with it for 1 or 2 hours straight
<hazmat> else its just reporting against recorded state
<SpamapS> hazmat: I saw this one before too
<SpamapS> hazmat: I bootstrapped quite recently
<SpamapS> anyway, time for weekend stuff
<hazmat> SpamapS, cheers
<m_3> SpamapS: later man... thanks!
<m_3> hazmat: euca-describe-groups is authorized from the cli
<hazmat> m_3, yeah.. saw that
<m_3> could also euca-authorize -P tcp -p 22 -s 0.0.0.0/0 junk-group
<backburner> will juju work with openstack ?
<m_3> backburner: yes, it's been tested with diablo I think
<enmand_> There are nova-cloud-controller and nova-compute charms, I believe
<enmand_> I haven
<enmand_> I haven't been able to find information or documentation on deploying Ubuntu Cloud Infrastructure and OpenStack yet though
<enmand_> In Oneiric, I mean
<m_3> enmand_: juju can deploy openstack using those charms... it can also deploy services _on_ openstack
<enmand_> m_3, yeah, I found the openstack charms and all, and I read some of the deploying on OpenStack stuff
#juju 2013-10-07
<AskUbuntu> How to get a juju charm's parameters | http://askubuntu.com/q/354747
<jamespage> jcastro, marcoceppi: do you know if its possible to make the juju-gui display icons for locally launched charms?
<jamespage> trying to make my demo for ceph day on Wednesday look good
<rick_h_> jamespage: no, it's hard coded logic that only the promulgated charms in the store get their icons displayed
<jamespage> rick_h_, anyway I can hack that locally?
<rick_h_> jamespage: the only way to force that would be to run a local charm store and ingest your local charms, force them to be promulgated in the charmworld db
<rick_h_> jamespage: might file a bug for demo purposes of adding a feature flag that's "display all icons" since the logic is in the client side code of the gui
<rick_h_> I *think* we could still do that, but not 100% sure off the top of my head
<rick_h_> jamespage: but even then, we don't have access to the icon files for locally deployed charms. Juju doesn't send that data to the gui
<rick_h_> jamespage: so they still have to be in a running charmworld instance, they just don't have to be promulgated at that point
<jamespage> rick_h_, ah - I see
<marcoceppi> rick_h_: is there any documentation on "setting up your own charmworld"?
<rick_h_> marcoceppi: there's a charmworld charm?
<rick_h_> marcoceppi: with a readme on setting it up?
<rick_h_> marcoceppi: there's also the docs in the charmworld source tree for hacking purposes. Mirrored to RTD http://charmworld.readthedocs.org/en/latest/
<marcoceppi> rick_h_: thanks!
<marcoceppi> rick_h_: wait, charmworld != charmstore, is it?
<rick_h_> marcoceppi: no, not really. People refer to it that way sometimes. charmstore is a confusing mess
<rick_h_> charmworld == manage.jujucharms.com which ingests from LP + juju-core charm store
<marcoceppi> rick_h_: yeah, so there are no real docs on running your own charm store are there?
<rick_h_> marcoceppi: the juju-core one? no idea. Never thought to try it or look
<marcoceppi> rick_h_: there's this in the docs, which makes me think you _can_, but I've not seen any way how to https://juju.ubuntu.com/docs/charms-deploying.html
<marcoceppi> rick_h_: under "changing the defaults"
<rick_h_> interesting
 * marcoceppi rumages around
<jcastro> man, the incoming queue is crushing us
<jcastro> marcoceppi: paul c submitted sensu server and agent!
<marcoceppi> jcastro: I've got time today to tend to the queue, since we're post-release for stuff
<jcastro> marcoceppi: can you do logstash/kibana first?
<jcastro> then the sensu stuff?
<marcoceppi> jcastro: ack
<jcastro> arosales: out of curiosity I brought up our planning/BP problem to jono as we were talking on Friday
<jcastro> and I tossed out "we could just toss everything out and start from scratch"
<jcastro> and he heavily +1ed
<jcastro> so, that means I don't have baggage if you don't want to for this next cycle
<jamespage> jcastro, sorry - I managed to do zero review last week
<jamespage> work is a bit crazy right now
<jamespage> I owe dholbach the same apology
<adeuring> marcoceppi: could you have a look at my MP?
<marcoceppi> adeuring: yeah, can do
<adeuring> marcoceppi: thanks!
<sinzui> charmers, a new manage.charmworld.com is being built on gojuju. We are dumping the db to get a copy of featured and qa collections. Any changes you make between now and probably tomorrow will be lost. Do you need to feature any charms or QA any charms in the next 24 hours?
<rick_h_> marcoceppi: ^^ since you were talking about hitting the queue
<marcoceppi> sinzui rick_h_ we won't be doing any features, this won't affect charm promulgation, etc, correct?
<sinzui> correct marcoceppi
<rick_h_> marcoceppi: no, ingest should catch/keep up with that fine.
<jcastro> jamespage: yeah, I should have not scheduled you on review so close to release, that was my bad.
<sconklin> I'm unable to bootstrap the juju environment on my raring maas server. Best I can tell from searching, it's because all my nodes are "allocated to root", and not "ready". How can I return them to ready status?
<adam_g> anyone aware of any common issues wrt ssh key auth not working for local containers?
<rick_h_> adam_g: yea, the username is ubuntu and juju ssh doesn't seem to work. A manual ssh ubuntu@ip.addr.x.y will work
<adam_g> rick_h_, yeah, still no luck tho
<rick_h_> adam_g: oh, in that case no. Not seen that
<kurt_> jamespage: thinking further about our discussion a few weeks ago on consolidation of charms. Is it unwise to colocate the quantum-gateway with any other charms?
<kurt_> Here is the layout I'm thinking of. If anyone else has comments on why they think it wouldn't work or a better way to consolidate, please chime in.
<kurt_> http://pastebin.ubuntu.com/6206435/
<kurt_> let me repaste that into pastebin
<kurt_> There we go: http://pastebin.ubuntu.com/6206449/
<kurt_> Comments anyone? :)
<_mup_> Bug #1236590 was filed: juju destroy-machine leaves orphaned security groups <juju:New> <https://launchpad.net/bugs/1236590>
<jamespage> kurt_, hey
<kurt_> jamespage: hi - I think I can consolidate even further
<jamespage> kurt_, most likely; I've just re-deployed one of our internal test environments using MAAS and the LXC containers feature in 1.14.1
<kurt_> jamespage: I've not played with lxc yet, but have gotten fairly far without needing it.
<kurt_> as long as I stick to the rules you laid out before
<jamespage> machine 0 runs pretty much everything that can be containerized; mysql, rabbit, cinder, glance, nova-cloud-controller, swift-proxy and keystone
<jamespage> with quantum-gateway running alongside the juju bootstrap node on the bare metal
<kurt_> but cinder and glance will conflict, right?
<kurt_> outside of a container
<jamespage> kurt_, not under LXC - all services have their own filesystem and network namespaces
<kurt_> and without lxc? there's a problem, right?
<jamespage> oh - and the dashboard (under lxc that is)
<jamespage> kurt_, yup
<_mup_> Bug #1236598 was filed: Machine stuck in juju status if the machine doesn't start <juju:New> <https://launchpad.net/bugs/1236598>
<kurt_> ok, if I first want to try this out without, give me a sec and I will paste bin you my proposed layout
<kurt_> jamespage: http://pastebin.ubuntu.com/6206818/
<kurt_> the only thing I really have concerns about is co-locating quantum-gateway on cloud-controller
<kurt_> and remember this is really all on VMs
<kurt_> (not that that matters)
<jamespage> kurt_, just to give you an idea of what I am doing - http://paste.ubuntu.com/6206832/
<jamespage> kurt_, the quantum gateway writes /etc/nova/nova.conf so will conflict with the cloud controller charm
<kurt_> Ok, so is there a good candidate otherwise, or should it go to its own node?
<jamespage> kurt_, if you are not using containers - its own node
<jamespage> kurt_, but adding containers is easy
<jamespage> juju add-machine lxc:0
<jamespage> adds a new lxc container to machine 0
<jamespage> which you can then "juju deploy --to 0/lxc/0 mysql"
<kurt_> nice.  But there are still some deployment issues for containers, right?
<kurt_> My strategy was to get everything working on regular VMs first, then dive in to containers.  :)
<kurt_> I'm about half way through blogging it all.
<kurt_> jamespage: thanks for the intro on containers.  I will check it out.  I appreciate the feedback.
<jamespage> kurt_, with maas containers are OK
<jamespage> should get even better with 1.16.1
<adam_g> anyone know the correct way to inspect logs /w juju 1.15.0.1? i apparently missed the memo
<jamespage> adam_g, urgh
<jamespage> adam_g, no idea on that one
<jamespage> adam_g, but I just figured out why juju-core does not like talking to the compute api from within serverstack
<jamespage> its not dealing with packet fragmentation well
<jamespage> ip link set eth0 mtu 1546
<jamespage> and everything comes alive again!
<adam_g> jamespage, hmph
<sarnold> kurt_: oh cool, where can I find the blog post(s?) when you're done? I want to be better at juju and it'd be nice to learn from your experience -- you've put in a ton of work :)
<kurt_> sarnold: sure.  I'm restructuring it now.  I will welcome feedback from everyone when it's ready.
<jamespage> adam_g, I think I might need to work in the bits and pieces to drop the mtu in instances using dnsmasq on the gateway nodes
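A hedged sketch of that approach: dnsmasq can push a smaller MTU to instances via DHCP option 26 (the interface-MTU option); the value 1454 is illustrative, pick one below the fragmenting link:

    echo 'dhcp-option=26,1454' | sudo tee /etc/dnsmasq.d/50-mtu.conf
    sudo service dnsmasq restart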
<sarnold> kurt_: thanks :D I'm looking forward to it
<kurt_> cheers.  It's taken a lot of effort.
<adam_g> jamespage, https://lists.ubuntu.com/archives/juju/2013-September/002998.html FYI
<jamespage> sinzui, fyi I just upgraded 1.14.1 running agents to 1.15.1 OK on our openstack deployment
<sinzui> oh goody
<sinzui> jamespage, was the tools-url: https://swift.canonistack.canonical.com/v1/AUTH_526ad877f3e3464589dc1145dfeaac60/juju-dist/tools
<jamespage> sinzui, well it was not canonistack
<sinzui> okay
<jamespage> http://10.98.191.34:8080/v1/AUTH_79699f6f71e245b186720f1e2bc03cf0/juju-dist/tools
<jamespage> sinzui, but its looks much the same
<sinzui> That does match the pattern I expect
<jamespage> sinzui, upgrade-juju with maas provider not so happy
<sinzui> :(
<jamespage> sinzui, hmm - can't juju status any longer...
<sinzui> oh, that is worse
<sinzui> I have waited 2 hours after a call to upgrade and I still had access using old and new juju
<jamespage> sinzui, hmm - looks like the old agent is looking for the new tools in tools/
<jamespage> rather than consuming simplestreams
<jamespage> that might be my bad
<sinzui> jamespage, I think that is correct. 1.14.1 does not know about streams
<sinzui> jamespage, Though I wondered if the upgraded bootstrap agent pointed the unit agents where to find the new juju. Since I saw the bootstrap upgrade from tools/, but not the units, maybe the units went to a different location.
<jamespage> sinzui, I can't upgrade the bootstrap either with maas
<jamespage> sinzui, can't figure out how to get 1.15.1 into the right location
<jamespage> if I sync with 1.14.1 it ignores the 1.15.1 tarballs
<jamespage> if I do it with 1.15.1 it just pushed tools/releases and tools/streams
<sinzui> yeah, the updated release-public-tools does a lot of fixing up to support old and new
<sinzui> jamespage, This is what I have been doing to collect, extract, and organise a tree that can be synced: http://pastebin.ubuntu.com/6206992/
<sinzui> ^ This is also why I wonder if I am doing something wrong.
<jamespage> sinzui, well I can do that with openstack OK as I just push the tree into swift
<jamespage> sinzui, but for maas I have to use sync-tools
<sinzui> oh?
<sinzui> We will need some legacy support then in sync-tools I think
<jamespage> I need that fixed for 1.16.1
<jamespage> I need that fixed for 1.16.0 rather
#juju 2013-10-08
<omgponies> what's the format for deploying a charm that's not in the official charm store but is in a bzr repo ?
<davecheney> omgponies: local ?
<omgponies> without having to grab it local
<omgponies> looks like this works - juju deploy cs:~paulcz/precise/elasticsearch
<davecheney> yup, that is the format for a private charm store branch
<davecheney> thing
<omgponies> is there a flag for setting a version ... which I assume is the equivalent of the 'revision' file ?
<davecheney> omgponies: version is the value of the revision file in the root of your charm
<davecheney> or 1, by default
<omgponies> right,  I mean during deployment
<omgponies> for instance 'revision' for the charm above is currently 50
<omgponies> can I specify that so if some jerk does a breaking change I don't find out by surprise
<davecheney> omgponies: make a file called revision in the root of your charm
<davecheney> put the number 50 in there
<davecheney> oh, hang on
<davecheney> i see what you are asking
<davecheney>  cs:~paulcz/precise/elasticsearch:$REVISION
<davecheney> is the full name
<davecheney> best to commit a revision file
<davecheney> hm
<davecheney> actualy
<davecheney> no
<davecheney> that may
<davecheney> probably not work
<davecheney> i don't think private charmstore branches have a concept of revision as strong as real charms do
<davecheney> there is only one revision of the charm
<davecheney> and that is head
<omgponies> right
<omgponies> is there a doc somewhere that describes everything that is available from the 'unit-get' command ?
<davecheney> omgponies: not really
<davecheney> from memory
<davecheney> public-address and private-address are the only useful ones
<davecheney> yup, sauce says those are the only two commands
<omgponies> thinking about from a monitoring perspective ... be able to get a list of units/services deployed to a box and a) set a useful hostname, b) determine what metrics to care about
<davecheney> omgponies: unit-get probably isn't going to be what you want
<omgponies> yeah I don't think there's any good way to get what I want
<omgponies> probably need something to stick in the middle that can correlate 'ip-10-29-206-28' to data from `juju status`
<gnuoy> I don't have a public bucket for juju tools and use bootstrap --upload-tools to get them into a new environment. I've upgraded my client to 1.14 and now I want to upgrade the juju tools in existing environments. How do I do that ? I don't see an --upload-tools option to sync-tools
<gnuoy> I do see --source for sync-tools and the comment suggests I can specify a local dir but I'm unable to spot the tools on the filesystem
<gnuoy> ok, facepalm. I didn't know about: juju upgrade-juju --upload-tools
<allenap> When talking about containers in https://juju.ubuntu.com/docs/authors-subordinate-services.html, is that a general term? Or is it referring to LXC, for example?
<evilnickveitch> allenap, I believe it is used in a general sense. That's another page that needs a good rewrite...
<allenap> evilnickveitch: Cool, thanks.
<_mup_> Bug #1236824 was filed: boostrap tries to build jujud <juju:New> <https://launchpad.net/bugs/1236824>
<_mup_> Bug #1236900 was filed: tar: unrecognized option ''--numeric-uid'' <juju:New> <https://launchpad.net/bugs/1236900>
<drj11> hello
<drj11> we have changed our config.yaml for a service to add a new configuration option. How do we add that configuration option to an already running service?
<marcoceppi_> drj11: you'll need to upgrade the charm
<drj11> marcoceppi_: thanks. I thought so. and I thought we'd tried that. me and morty are working on it. we'll try again
<drj11> marcoceppi_: thanks again
<marcoceppi_> drj11: was the charm initially deployed from the charm store or from local?
<marcoceppi_> adeuring: I've got a few comments on your merge for charm-tools
<adeuring> marcoceppi_: thanks, I'll look
<marcoceppi_> adeuring: You're trying to find "maintainer" of the branch, but to be honest the owner of the branch is always going to be ~charmers for  promulgated branches (or ~team)
<marcoceppi_> adeuring: we don't do stacking anymore for promulgated branches
<drj11> marcoceppi_: from local
<drj11> marcoceppi_: we never use the charm store
<marcoceppi_> drj11: gotchya, that should work. You should be able to verify that the charm revision is bumped in the juju status
<marcoceppi_> adeuring: so if a maintainer isn't in ~charmers (which is an expected case) then your function will fail
<adeuring> marcoceppi_: ok, so we might need a special rule for branches owned by charmers. But what if somebody deliberately wants to fork a promulgated charm? In that case, this person should change the maintainer field. Otherwise, the official maintainers might receive undeserved "hate mail".
<marcoceppi_> adeuring: It's an interesting case
<marcoceppi_> I think an exception will need to be made for ~chamers for sure
<marcoceppi_> adeuring: also, things like the the juju gui charm are maintained by "Juju GUI Team" not sure how this would handle that
<adeuring> marcoceppi_: let me look how this works with today data.
<adeuring> marcoceppi_: the GUI is actually a good example: Sending a mail to the address given in the maintainer field (juju-gui@lists.launchpad.net) results in an error: "host polevik.canonical.com [91.189.95.64]: 550 unknown user". OTOH, the "real" mailing list (juju-gui@ubuntu.com) can't be checked either... Anyway, I'm open to suggestions for how else to check the sanity of the maintainer field.
<adeuring> ah, "juju-gui@lists.launchpad.net" would have worked, if the juju-gui team had set up this list on LP
<marcoceppi_> adeuring: I think just making sure it follows "Full Name <properly-formatted@email.tld>" would suffice. We don't take much responsibility for personal branches and these should be checked during final charm review
<adeuring> marcoceppi_: ok, I believe the check you suggest already exists, so let's abandon the MP
<marcoceppi_> adeuring: I like the moving of the version to the package
<adeuring> marcoceppi_: yeah, that makes things easier for some changes, but that's easy to include as a drive-by fix in any real branch.
<marcoceppi_> adeuring: I'll see how easy it is to just cherry pick that commit
<adeuring> marcoceppi_: ok, let me just revert the other changes, that's the easiest way, I believe.
<adeuring> marcoceppi_: done
<matsubara> hi there, I bootstrapped a juju environment on openstack, then juju destroyed it. When I try to bootstrap again, juju says there's already a environment bootstrapped. Any ideas on how to fix this? Logs here: http://pastebin.ubuntu.com/6210041/
<kurt_> Hi Guys - I cannot get maas-tags to work with 14.1.  According to the constraints documentation, it should.  Any comments?
<kurt_> http://pastebin.ubuntu.com/6210186/
<marcoceppi_> 1.14.1 doesn't have maas-tag support
<marcoceppi_> It was just added in 1.15.1, https://lists.ubuntu.com/archives/juju/2013-October/003019.html
<marcoceppi_> kurt_: ^
<kurt_> marcoceppi_: is 1.15.1 support on precise?
<marcoceppi_> kurt_: all juju releases are available for all current supported ubuntu releases
<kurt_> Ok, guess its time to upgrade :)
<CheeseBurg> no weekly update today?
<marcoceppi_> kurt_: however, odd series, like 1.13, 1.15, etc are considered "devel" releases
<marcoceppi_> so you need to get them from ppa:juju/devel
<kurt_> marcoceppi_: Ok, thanks.  I may have it already.
<kurt_> marcoceppi_: in reading the readme for 1.15.1, I'm confused by this statement: "As an unstable release we do not make guarantees about clean upgrade
<kurt_> paths of running environments from one 1.13.x version to another."
<marcoceppi_> kurt_: 1.<ODD>.X releases are development releases, they are not considered stable
<marcoceppi_> For 1.<EVEN>.X releases you should be able to safely run `juju upgrade-juju` to upgrade from one stable juju release to another
<kurt_> marcoceppi_: right, but I am in fact going from 1.13.1 to 1.15.1.  The statement appears to guarantee it will break. :)
<marcoceppi_> kurt_: it shouldn't, it's just saying it might
<marcoceppi_> once 1.16 comes out I recommend you sit on that release and move between 1.<even> releases
<kurt_> marcoceppi_: ok, we'll see soon enough, I have it installed ;).  And yep - how far is 1.16 out?
<marcoceppi_> kurt_: it should be released sometime this month
<marcoceppi_> iirc
<kurt_> cool, thanks
<matsubara> Does juju keep any local state of a bootstrapped environment? I keep getting an error: ERROR juju supercommand.go:282 environment is already bootstrapped even though I destroyed the whole environment. http://pastebin.ubuntu.com/6210041/
<marcoceppi_> matsubara: is this a local environment?
<matsubara> marcoceppi_, canonistack
<matsubara> marcoceppi_, btw, I updated juju-core to 1.15.1 and now when I destroy-environment, the command returns but doesn't seem to destroy anything (i.e. If I run the command again, I get the question if I want to destroy the env)
<mhall119> jcastro: marcoceppi_: halp!
<marcoceppi_> mhall119: what's up?
<mhall119> I'm trying to make my config-changed hook call my db-relation-changed hook to re-acquire my DB credentials
<mhall119> http://bazaar.launchpad.net/~api-website-devs/ubuntu-api-website/api-website-canonical-is-charm/view/head:/hooks/config-changed#L49
<mhall119> but according to the charm log, that's failing: https://pastebin.canonical.com/98709/
<marcoceppi_> mhall119: yeah, because relation hooks get extra environment variables
<mhall119> marcoceppi_: so how can I make this work?
<marcoceppi_> you can't really call relation-get out-of-band without supplying a relation-id
<marcoceppi_> mhall119: what I've done in charms is record the data in a dot file in the $CHARM_DIR and read that in other hooks
<mhall119> marcoceppi_: line 48 of the config-changed hook gets a relation-id
<marcoceppi_> mhall119: you need to record the $JUJU_RELATION_ID from the db-relation-changed hook
<mhall119> marcoceppi_: FWIW, this is derived from the internal certification website charm
<mhall119> marcoceppi_: $(relation-ids db) won't work?
<mhall119> or is it that I need to set that to a named env var in the db-relation-changed hook before calling relation-get?
<marcoceppi_> mhall119: db-relation-changed, when fired during a relation event, will already have the proper JUJU_RELATION_ID set
<marcoceppi_> one second
<mhall119> ok, so if I fire it from config-changed I need to set JUJU_RELATION_ID=$DID when I call db-relation-changed?
<marcoceppi_> mhall119: yes
<mhall119> instead of as $1
<marcoceppi_> $1 ?
 * marcoceppi_ checks code
<mhall119> it was passing the relation id from $(relation-ids db) as the first argument to db-relation-changed when it called it
<mhall119> https://pastebin.canonical.com/98709/
<mhall119> I mean
<mhall119>     DID=$(relation-ids db)
<mhall119>     [ -n "$DID" ] && hooks/db-relation-changed $DID
<marcoceppi_> right, set it as an environment variable before executing the hook
<mhall119> ok
<mhall119> and is that the only one that needs to be set?
<marcoceppi_> db-relation-changed, unless programmed to, won't know what to do with $2
<marcoceppi_> $1
<marcoceppi_> mhall119: you typically need JUJU_REMOTE_UNIT
<marcoceppi_> but I think you can get away without it
<marcoceppi_> I'm waiting for my desktop to come back online to double check
<mhall119> ok
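
A minimal sketch of the approach being discussed, assuming a relation named "db" and that relation-list -r is available to resolve the remote unit (both assumptions; adapt to your charm):

    #!/bin/bash
    # hooks/config-changed -- re-run the db hook with a relation context (sketch)
    DID=$(relation-ids db)                  # e.g. "db:1"
    if [ -n "$DID" ]; then
        REMOTE=$(relation-list -r "$DID")   # e.g. "mysql/0"
        JUJU_RELATION_ID="$DID" JUJU_REMOTE_UNIT="$REMOTE" hooks/db-relation-changed
    fi
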
<mhall119> marcoceppi_: could it have ever worked passing it as the first parameter?  Like I said, this was taken from one of the IS charms
<marcoceppi_> mhall119: depends on how the db-relation-changed hook is set up
<marcoceppi_> mhall119: doesn't look like it
<mhall119> ah, ok, I see how it was being used before
<mhall119> marcoceppi_: ok, new question
<mhall119> suppose I have a config parameter called bzr_revno, and in config.yaml for my charm I have it default to 1
<mhall119> then, I update my app's code and I update the charm's config to make bzr_revno default to 2
<mhall119> now I just found out that this doesn't actually call config-changed hook
<mhall119> is there *any* hook that would react to a change in the default config value?
<marcoceppi_> mhall119: hooks/upgrade-charm - but that's only reacting to an upgrade-charm event. What I recommend you do is call hooks/config-changed from the upgrade-charm hook
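
A sketch of that suggestion; the whole upgrade-charm hook can simply delegate:

    #!/bin/bash
    # hooks/upgrade-charm -- delegate to config-changed so upgrades re-apply config
    exec "$(dirname "$0")/config-changed"
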
<mhall119> and then when IS deploys a new version, upgrade-charm will be called?
<marcoceppi_> mhall119: I don't know how IS does it, but if they use upgrade-charm, then yes, hooks/upgrade-charm will run
<mhall119> ok, I'll check
<mhall119> oh, hey, hooks/upgrade-charm is a symlink to hooks/config-changed
<mhall119> so it should be called anyway
<marcoceppi_> :)
<mhall119> but it wasn't...
<mhall119> and I'm not sure why
<marcoceppi_> check the logs?
<mhall119> https://pastebin.canonical.com/98709/
<marcoceppi_> also, run juju get api-website
<marcoceppi_> see what juju thinks the value/defaults are
<marcoceppi_> mhall119: there might be a thing where juju doesn't update default values for configs when charms are upgraded
<marcoceppi_> I could see that being the case
<mhall119> so you'd still have to call juju set api-website-app bzr_revno=37
<marcoceppi_> mhall119: if what I described above is true, then yes
<mhall119> ok
<mhall119> between that and needing to set JUJU_RELATION_ID, I think that explains the failure I was having
<mhall119> thanks marcoceppi_
<marcoceppi_> mhall119: np, let me know if you bump in to any other oddities
<mhall119> don't worry, I will :)
<kurt_> marcoceppi_: odd things going on with using maas-tags and deploying services http://pastebin.ubuntu.com/6210935/
<kurt_> my tag matches an existing host, the one the bootstrap node is on, but juju still wants to spin up a second node
<jamespage> kurt_, specifying a tag does not force two services to deploy onto the same machine
<jamespage> kurt_, you have to use --to <machineid> to do that
<kurt_> jamespage: but if those hosts are not yet known to juju, how do you accomplish deploying services to those nodes?
<jamespage> kurt_, deploy one service first, and then deploy using --to for subsequent services
<kurt_> jamespage: how is moving on to the next node handled, then?
<jamespage> by co-locating services you are specifically telling juju that you are taking charge of where stuff lands
<jamespage> to add a new node just don't specify --to
<kurt_> right.  but if I want specific services to land on specific nodes, and they are unknown to juju, this sounds like a problem
<kurt_> I get that I use --to.  But when I need the next set of services on the next node...
<kurt_> how do I force juju to use a particular node? I thought that's what the purpose of tagging was
<kurt_> tagging + constraints
<jamespage> kurt_, tagging is just a way of grouping servers
<jamespage> kurt_, juju will ask maas for a new unit which matches the provided tag
<kurt_> ok, but it doesn't appear that was happening
<jamespage> kurt_, that's not what it sounded like above: "juju still wants to spin up a second node"
<jamespage> that's the correct result
<kurt_> jamespage: so if the node is already running that matches the tag, juju will look to deploy elsewhere without the "--to" function?
<jamespage> kurt_, yes
<kurt_> that doesn't seem intuitive
<jamespage> the node is already allocated to a service
<jamespage> so juju won't by default place another service on it
<jamespage> think about a deployment with 1000 nodes
<jamespage> where 400 are for storage and 600 are for compute
<jamespage> I can assign tags for 'compute' and 'storage' to the different server types
<jamespage> and then use juju to target nova-compute to 'maas-tag=compute' and ceph to 'maas-tag=storage'
<jamespage> or servers could be tagged per physical availability zone, or switch or whatever
<kurt_> Ok, when we talk about more servers in the tag group than a single server it makes sense
<kurt_> but in the context of a single node, it didn't
<jamespage> tags don't really make sense in that context
<kurt_> ok. makes sense.  I am really looking for the one-size-fits-all hammer for juju.  It just doesn't exist.
<kurt_> my "--force-to" option :D
<omgponies> is there a way to get the unit name from inside a hook
<omgponies> i.e. from juju status where I see  'services:\n  elasticsearch\n ... units: elasticsearch/0
#juju 2013-10-09
<kurt_> marcoceppi_: seeing this error when trying to deploy charm "Requested array, got <nil>."  http://pastebin.ubuntu.com/6211685/
<davecheney> omgponies: $UNIT_NAME
<davecheney> from memory
<davecheney> will check
<davecheney> but essentially it'll be an environment variable
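
For the record, the hook environment variable in juju-core is JUJU_UNIT_NAME; a trivial example:

    # inside any hook; logs e.g. "running as elasticsearch/0"
    juju-log "running as ${JUJU_UNIT_NAME}"
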
<davecheney> as I understand it, the http relation interface defines two properties, host and port
<davecheney> is this correct (Y/N) ?
<styles_> How did you guys like the switch to GoLang?
<Kabiigon> hi, I am having an lxc issue with a juju install
<Kabiigon> lxc fails to start
<Kabiigon> fresh install of precise
<Kabiigon> error: net: no such interface
<rsynnest> hey quick random question: I have 12.04 LTS and I just installed juju locally with "sudo apt-get install juju-local", before realizing that on LTS I need to install with "sudo apt-get install juju-local linux-image-generic-lts-raring linux-headers-generic-lts-raring" -- so I'm just wondering if I need to remove anything from that first apt-get?
<rsynnest> I'm brand new to IRC as well...
<rsynnest> not sure if i can just throw questions out there....
<marcoceppi_> kurt_: I think it's safe to ignore, unless the gui didn't start properly
<kurt_> marcoceppi_: I am running in to this: https://bugs.launchpad.net/juju-core/+bug/1236734
<_mup_> Bug #1236734: juju 1.15.1 polls maas API continually <juju-core:Fix Committed by gz> <https://launchpad.net/bugs/1236734>
<marcoceppi_> kurt_: ah, gotchya
<kurt_> actually, on closer inspection… may not be that
<kurt_> sorry, yes it is… wasn't looking at the apache log
<kurt_> it's listed as critical, so hopefully a fix comes soon
<jcastro> marcoceppi_: hey so mramm and I demoed juju at our lug last night
<jcastro> and I was talking about horizontal scaling
<jcastro> and I did add-unit -n50 as an example
<jcastro> "but I wouldn't do that on my laptop with LXC, that's just insane."
<jcastro> except I hit enter instead of "next slide"
<jcastro> my machine's load is like at 65
<mramm> haha
<jcastro> so tldr, ctrl-c after a juju command does nothing
<jcastro> you're getting -n50 whether you cancelled or not
<marcoceppi_> jamespage: yeah, because that's what's up
<marcoceppi_> err jcastro ^
<_mup_> juju/trunk r624 committed by kapil@canonical.com
<_mup_> merge env-safety-belt
<_mup_> Bug #1057665 was filed: juju destroy-environment is terrifying; please provide an option to neuter it <juju:Fix Committed by hazmat> <juju-core:New> <https://launchpad.net/bugs/1057665>
<jcastro> marcoceppi_, I missed the previous line above your response to jamespage
<hazmat> styles_, there's only one dev overlapping between the two languages. On the whole it's been nice as a language, and the ecosystem/lib support has come along quite nicely in the two-plus years since the rewrite started.
<marcoceppi_> jcastro: there was none
<hazmat> jcastro, if we can get lxc-clone support with snapshot.. its takes about 0.2s to create a container :-)
<hazmat> versus about 7m for each without it
<jcastro> hazmat, I think the bottleneck is post container
<jcastro> like, all of them doing one apt-get update at once, then updatedb/mlocate stuff, update-apt-xapian-index and a bunch of other IO stuff
<hazmat> jcastro, true.. most of it is cloud-init based config in terms of time till a juju  unit.
<jcastro> http://paste.ubuntu.com/6214069/
<jrwren> option to turn that stuff off?
<kurt_> LOL bug 1057665
<_mup_> Bug #1057665: juju destroy-environment is terrifying; please provide an option to neuter it <juju:Fix Committed by hazmat> <juju-core:New> <https://launchpad.net/bugs/1057665>
<hazmat> kurt_, james does beautiful bug descriptions :-)
<kurt_> that was pretty funny
<kurt_> Can anyone tell me if you need a guinea pig to test a fix for bug 1236734 ? It's killing me. :D
<_mup_> Bug #1236734: juju 1.15.1 polls maas API continually <juju-core:Fix Committed by gz> <https://launchpad.net/bugs/1236734>
<adeuring> marcoceppi_: another MP for charmtools. A one-line change this time
<adeuring> marcoceppi_: https://code.launchpad.net/~adeuring/charm-tools/fix-setup-sdist/+merge/190170
<marcoceppi_> adeuring: cool, I'll sit on it for several weeks :P
<adeuring> ;)
<marcoceppi_> adeuring: this actually can be fixed by just having ez_setup.py added to the manifest
<marcoceppi_> which isn't committed in the repo
<marcoceppi_> :\
<marcoceppi_> Oh, nvm, this is even cleaner
<adeuring> marcoceppi_: sure, that's the other option
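
For reference, the manifest fix mentioned above would look roughly like this:

    # ship ez_setup.py in the source tarball so setup.py can import it
    echo "include ez_setup.py" >> MANIFEST.in
    python setup.py sdist
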
<marcoceppi_> adeuring: +1 from me
<adeuring> marcoceppi_: thanks!
<adeuring> marcoceppi_: let's also create a tarball for charmworld. We'd like to use the tarball instead of PPAs in charmworld -- makes it easier to maintain the right version.
<adeuring> erm, I mean a tarball of charm-tools.
<marcoceppi_> adeuring: I just did that :)
<marcoceppi_> adeuring: for the brew installer
<adeuring> marcoceppi_: great!
<marcoceppi_> adeuring: https://launchpad.net/charm-tools/stable/1.0
<marcoceppi_> each release will get a tarball going forward, but I might be restructuring the release setup
<marcoceppi_> instead of having "stable", it might be based on major version, so there will be a charm-tools/1.0, charm-tools/1.1 series, etc
<adeuring> marcoceppi_: ...but running setup.py from that tarball raises "ImportError: No module named ez_setup"
<marcoceppi_> adeuring: expect a 1.0.1!
<adeuring> marcoceppi_: cool, even better!
<marcoceppi_> adeuring: I'm re-arranging the charm-tools release layout, the stable series is going away
<marcoceppi_> adeuring: not sure if that breaks anything for you
<marcoceppi_> also, 1.0.1 with patched setup.py coming out
<adeuring> marcoceppi_: no, I don't think so. But I'd appreciate a new tarball. We want to use one for charmworld. (OK, I could also create a new one, but staying in sync seems a bit better.)
<marcoceppi_> adeuring: yeah, 1.0.1 will have that new tarball
<adeuring> marcoceppi_: sorry, was typing without reading: \o/ for the new release
<marcoceppi_> adeuring: https://launchpad.net/charm-tools/1.0/1.0.1/+download/charmtools-1.0.1.tar.gz
<adeuring> marcoceppi_: perfect! thanks a lot!
<marcoceppi_> adeuring: I've moved the branch structure around a bit too, 1.1 is the current active series of development (lp:~charmers/charm-tools/1.1). I'm still trying to figure out how to best manage a project in lp so bear with me
<AskUbuntu> how to stop entire openstack so that i can run maas server on my localhost | http://askubuntu.com/q/355914
<mhall119> marcoceppi_: it doesn't look like setting that env var is working
<marcoceppi_> mhall119: what's the error you're getting?
<marcoceppi_> (is it the same as before)?
<mhall119> 2013-10-09 17:57:19 INFO juju.worker.uniter context.go:234 HOOK error: no relation id specified
<mhall119> marcoceppi_:
<marcoceppi_> mhall119: yeah, fun
<mhall119> marcoceppi_: JUJU_RELATION_ID=db:1
<mhall119> that's what I was prefixing the hook call with
<marcoceppi_> mhall119: well, there are two things wrong with that
<marcoceppi_> aside from it not working properly, you can't just do a static assignment, the relation_id is created when add-relation happens and it's not easily predictable as to what it will be
<mhall119> marcoceppi_: I'm getting it dynamically
<mhall119> DID=$(relation-ids db)
<marcoceppi_> mhall119: ah, cool good
<mhall119> then JUJU_RELATION_ID=$DID
<mhall119> but I logged ${DID} and it was db:1
<marcoceppi_> let me fire up a charm real quick
<mhall119> is "db:1" a valid value for JUJU_RELATION_ID?
<marcoceppi_> yes
<mhall119> marcoceppi_: see line 1847 of http://paste.ubuntu.com/6214819/
<mhall119> that's printing ${JUJU_RELATION_ID}
<mhall119> in db-relation-changed
<mhall119> but calls in the same script to relation-set and relation-get say no relation id specified, so is having the env variable not enough?
<mhall119> marcoceppi_: ?
<mhall119> jcastro: I think I broke marco :(
<jcastro> hmm?
<marcoceppi_> mhall119: sorry, lunching
<marcoceppi_> mhall119: will be back in a few :)
<mhall119> jcastro: a joke
<mhall119> lunch sounds like a good idea though
<AskUbuntu> how to run openstack and maas server on localhost together | http://askubuntu.com/q/355931
<sinzui> rick_h_, jcsackett Do either of you have any ideas why staging.jujucharms.com is sending me hate mail :https://bugs.launchpad.net/charmworld/+bug/1237588
<_mup_> Bug #1237588: BdbQuit errors <charmworld:Triaged> <https://launchpad.net/bugs/1237588>
<rick_h_> sinzui: sorry, was doing some manual debugging. stagign is having a strange error
<rick_h_> bac: think that's you? ^^ doing pdb in the live gunicorn running server? pdb and gunicorn don't play nice
<sinzui> rick_h_, bac: That sounds much better than having that in trunk. We can mark the bug as a lower priority or invalid if we are certain we cannot release this to production
<bac> rick_h_, sinzui: i have only been doing passive queries and looking at log files.  not i.
<rick_h_> sinzui: yes, this is due to manual monkeying with the source on staging.
<rick_h_> I'll mark invalid and make sure to back out my pdb's from earlier. supervisord must have come along and restarted the gunicorn servers and picked up the changes
<rick_h_> sinzui: reverted changes
<sinzui> Thank you rick_h_
<rick_h_> sinzui: feel free to ignore any that come along before it restarts. Sorry for the noise
<sinzui> np. I just couldn't understand why I started seeing it. I had this fear something bad was in production and we wouldn't be able to deploy to it until tomorrow
<mhall119> marcoceppi_: so I've tried passing the relation-id with -r to relation-get and now I get:
<mhall119> 2013-10-09 19:22:51 INFO juju.worker.uniter context.go:234 HOOK error: no unit id specified
<mhall119> :(
<marcoceppi_> mhall119: I was afraid of this
<marcoceppi_> mhall119: so, in addition to -r <RELATION_ID> you need to specify which unit to pull the data from
<marcoceppi_> this is in the format of mysql/0 for example
<marcoceppi_> and is set from JUJU_RELATION_UNIT (I think)
<marcoceppi_> mhall119: there are way easier ways to manage this stuff
<hazmat> marcoceppi_, JUJU_REMOTE_UNIT afaicr
<hazmat> mhall119, what's your context.. if you're in a relation hook all this is preset for you...
<hazmat> ah config-changed
<mhall119> hazmat: I'm trying to re-build my db_settings.py for django when a config change results in a new rev of the django code branch
<mhall119> hazmat: marcoceppi_: I'm going to just copy over the old settings, and let a regular db-relation-changed update it when needed
<marcoceppi_> mhall119: the other idea is to just save the database details to a text file in the $CHARM_DIR and have config-changed read that to update the db_settings
<_mup_> Bug #1237626 was filed: constraints should be validated <juju:New> <https://launchpad.net/bugs/1237626>
<mhall119> thanks marcoceppi_, I think I have a charm that'll work now
<mhall119> jcastro: I still have leftover lxc machine dirs after destroy-environment
<jcastro> you will until you upgrade to 1.15.1
<mhall119> ah, ok, still have 1.14.1-saucy-i386
<jcastro> if you could wait another day or so ....
<jcastro> or move to -devel if you want to be risque
<mhall119> I can wait, time to ship my charm off to webops
<_mup_> Bug #1237634 was filed: maas tags support should support full tag syntax <juju:New> <https://launchpad.net/bugs/1237634>
<adam_g> does juju-core support maas-tags as constraints? as outlined in http://maas.ubuntu.com/docs/tags.html ?
<marcoceppi_> adam_g: in 1.15.1 it does
<adam_g> marcoceppi_, ah ya just found that
<adam_g> marcoceppi_, did i read somewhere that the 1.14.1 -> 1.15.1 upgrade is bumpy?
<marcoceppi_> adam_g: there's a bug open about it, have not tried myself
<adam_g> marcoceppi_, have the bug # handy? about to try
<marcoceppi_> adam_g: 1.16 comes out reallll soon if you want to wait for the next stable
<marcoceppi_> adam_g: https://bugs.launchpad.net/juju-core/+bug/1236622
<marcoceppi_> adam_g: 1.16 has the fix for this
<adam_g> marcoceppi_, thanks
<eagles0513875> fwereade: haha :D now i know where you hid
<eagles0513875> hide
<eagles0513875> mhall119:  :D
<mwhudson> davecheney: hey
<mwhudson> oh
<mwhudson> unping
 * mwhudson hates bash instead of bugging people
#juju 2013-10-10
<hazmat> davecheney, The haproxy charm supports this via explicit configuration as explained in its readme
<hazmat> davecheney, it already extends the interface
<matzie> hi, anyone around to validate my mental model of juju's intentions and capabilities? Fair to describe it as "like apt-get for services", orchestrating multiple machines in any cloud (or pseudo-cloud env, like local lxc) and doing whatever is necessary to both provision and configure the individual components of that service and link them together…?
<axw> matzie: I think everything after "like apt-get for services" is pretty accurate. Whereas apt-get has automatic dependency resolution, in juju you deploy individual services and connect them together explicitly
<matzie> ok that makes sense
<matzie> so it's up to individual charms how to do the actual work, right? and there's no single prescribed way of doing that?
<axw> matzie: right. juju takes care of provisioning machines, opening up ports etc.; installing charms onto a machine; and giving charms a way of connecting to one another
<axw> how they behave is entirely up to you
<axw> a charm is just a set of executables (scripts, typically) and some metadata
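
Concretely, a minimal charm tree looks roughly like this (the hooks are plain executables, in any language):

    mycharm/
      metadata.yaml     # name, summary, provides/requires interfaces
      config.yaml       # user-settable options (optional)
      revision          # charm revision number
      hooks/
        install
        start
        stop
        config-changed
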
<matzie> ok. I've been working heavily with ansible recently, which has an extensible "inventory" concept that can get data from lots of places… wondering whether the output of 'juju status' is generated at each call by effectively working through whatever options are specified (e.g. in environments.yaml) or whether juju has its own 'inventory' under the hood?
<axw> matzie: juju has a mongo database (implementation detail - could change in the future) that contains information about machines, services, charms, etc.
<axw> matzie: I'm not experienced with Ansible, but your description makes it sound like it discovers information about the environment
<matzie> thanks for your help earlier axw - I'm travelling with an intermittent connection.
<axw> nps
<axw> matzie: (follow on) juju, otoh, knows everything about the environment because it "owns" it; so it has all the knowledge it needs in its mongodb database
<matzie> it can do - an arbitrary script squirts JSON into it defining the hosts it works with and variables about them.  There are scripts that query EC2 for example.
<axw> ah ok
<matzie> probably trivial to parse the output of juju status.  But pleased to hear that I've basically got the idea behind juju, and it's clearly a game changer.  Exploring it further today now my train journey's ending.  Thanks again.
<matzie> ah - that mongodb is on the bootstrapped node?
<axw> matzie: juju status output is either json or yaml (--format=...)
<axw> matzie: yes
<matzie> perfect.  Ok, catch you later...
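
The --format option axw mentions makes the status trivially machine-readable, e.g. for feeding an external inventory script:

    juju status --format=yaml   # the default
    juju status --format=json
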
<matzie> I think I'm affected by https://bugs.launchpad.net/juju-core/+bug/1221868 (EC2/default VPC/ Security groups etc) so switched from using brew to install the client to using go.  Now bootstrapping fails with 'invalid series "unknown"' and setting default-series: precise in my env.yaml doesn't help.  Any suggestions?
<matzie> (everything's fine when I use a different Region where I don't have a VPC)
<matzie> (well - everything *was* fine with the (devel) version I got from brew.)
<axw> matzie: do you mean you're building juju-core from source?
<matzie> just followed instructions under the Mac tab on https://juju.ubuntu.com/docs/getting-started.html - first set uses brew, that worked fine, but once I switched to my normal EC2 region, everything stopped working, which led me to that bug.
<matzie> so I followed the go / bzr route on that same page, which means I now get
<matzie> $>juju --version
<matzie> 1.17.0-unknown-amd64
<axw> ok
<matzie> whereas previously I had 'precise' in there.  Thought specifying the series might make it work, but no luck so far.
<axw> matzie: presumably this env hasn't successfully bootstrapped yet, so can you "rm ~/.juju/environments/<envname>.jenv" and try bootstrapping again?
<matzie> indeed, but I'll try it again to be safe
<axw> there's some recent changes that cache env.yaml attributes in these environment-specific .jenv files
<matzie> (and manually deleting the control bucket / instances / sec groups etc too)
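
For reference, the reset axw suggests (environment name illustrative):

    rm ~/.juju/environments/myenv.jenv
    juju bootstrap -e myenv
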
<matzie> pasty: http://paste.pound-python.org/show/mVz6p1u5YdTRAikj7qDc/
<matzie> I think I'll just test with a different region (and the earlier version that brew installs) for now to work around this
<axw> matzie: the problem, I think, is that trying to bootstrap with unreleased tools won't work on OS X
<axw> because it can't find released tools, it tries and fails to upload *linux* tools locally
<matzie> I see.  experimented with sync-tools command too, but that expects them to - yes, upload them.
<matzie> I'll maybe repeat tests from a linux box then.
<axw> the EC2 bug fix should be in 1.16.0 I think, which should be released in the next day or so
<matzie> great
<TheMue> matzie: just heard you've got probs with juju on mac
<TheMue> matzie: will see how i can help
<matzie> thanks - no hurry though, can workaround
<matzie> can use a linux machine too, just need to get a client with the fix for https://bugs.launchpad.net/juju-core/+bug/1221868 - added ppa:juju/devel but that didn't seem to work.
<matzie> and can also workaround for today (just got a few hours to explore juju) by using a different EC2 region without a VPC
<TheMue> matzie: ok, just read the problem description so far, will now read the pasting
<TheMue> matzie: but yes, the "unknown" series is something we have to handle
<matzie> sure.  everything's fine for now just using a different region, and I hear a new release that incorporates this bug fix is imminent, so no problem.  Thanks for all the help guys
<TheMue> matzie: ah, ic, and even if the built tool would have the correct name for the series in your environment it would have been built on os x. so you would have no luck running it on ec2 (or anywhere else)
<matzie> ah
<mgz> gnuoy: https://bugs.launchpad.net/juju-core/+bug/1089291
<gnuoy> thanks
<TheMue> matzie: yes, the region (eu-west-1) does not have the tools so far. that's why the system tries to build one locally
<TheMue> matzie: and so you get a fine os x "unknown" binary which doesn't help you :/
<matzie> oh so it's the region that's the problem, not just that I have a VPC in that region? I see, so the bug wasn't the issue at all
<TheMue> matzie: that's my first finding, but let me investigate further
<matzie> thx
<TheMue> matzie: in the moment meeting, will continue after it
<noodles785> Wow - I hadn't done any charming for a few weeks, just created a new saucy instance, installed juju and tried the local provider (without refreshing on exactly what was required). End result, prompted clearly for exactly what was missing and bootstrapped in a minute painlessly: http://paste.ubuntu.com/6217921/
<noodles785> Well done to whoever has been working to make that painless :-)
<mgz> jamespage: does ppa:juju-core/golang have arm binaries yet?
<marcoceppi> noodles785: fyi, `sudo apt-get install juju-local` will fix those local deps :) but I'm glad you had a positive experience
<gary_poster> jcastro, marcoceppi marked GUI bug 1237605 invalid.  They are clickable, but AFAWCT the discourse charm does not declare any ports to expose, so Juju doesn't tell the GUI about any, so we can't make it a link
<_mup_> Bug #1237605: Public URL in the inspector should be clickable <juju-gui:Invalid> <https://launchpad.net/bugs/1237605>
<jcastro> oh, interesting!
<jcastro> I didn't know you linked based on what the charm did, that's actually totally awesome
<marcoceppi> gary_poster: it exposes port 80 via open-port
<gary_poster> marcoceppi, which hook?
<jcastro> huh so if a charm exposes some odd port, then the gui would just handle that and show the user something clickable?
<marcoceppi> gary_poster: https://github.com/marcoceppi/discourse-charm/blob/master/hooks/db-relation-changed
<marcoceppi> gary_poster: wordpress also does this in the db-relation-changed hook
<gary_poster> jcastro, currently, if charm exposes 80 or 443, that will make the main address clickable.  other ports are listed individually.  currently they are also clickable individually, but next release will only do that if connection is tcp, not udp
<jcastro> that is pretty cool
<gary_poster> marcoceppi, ah! jcastro I think your image showed that you had not connected a db yet.  so no charm bug, but the port will not show up until you connect a db
<gary_poster> marcoceppi, might be a confusing pattern
<gary_poster> but that's philosophical, not technical; I'll leave you all to decide
<jcastro> yeah, then that would make sense because at the time I was going to the page to show that it would not be running because there's no db
<marcoceppi> gary_poster: well if you don't connect a db, there's nothing to show. That's the idea behind opening port
<marcoceppi> at least what I would conclude
<jcastro> right, ok so the right answer then to tell users is ... we tell you the address, and where you will go
<jcastro> but we don't like it because you're not done yet
<marcoceppi> jcastro: gary_poster I like that gui does this
<jcastro> link it I mean
<marcoceppi> +1
<jcastro> yeah, this is actually a good feature, just non obvious
<gary_poster> marcoceppi, ack.  OTOH, if you go to the port and the web page says "hey!  I don't have a db!" that would potentially help folks.  <shrug>
<marcoceppi> gary_poster: RTFM ;)
<gary_poster> marcoceppi, lol, ok :-)
<marcoceppi> gary_poster: but that's also an interesting idea
<jcastro> like at the charm level?
<marcoceppi> gary_poster: actually, that's kind of a cool idea, might be worth discussing as a potential best practice
<jcastro> "hey don't forget to relate mysql dude."
<marcoceppi> jcastro: yeah, so open port 80 after install and put a placeholder page that says what to do to get the charm working
<marcoceppi> jcastro: once you add db, that page goes away
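
A rough sketch of the placeholder idea (paths and page content are illustrative, not an established convention):

    # hooks/install: serve a "not wired up yet" page and open the port
    cat > /var/www/index.html <<'EOF'
    <html><body>Deployed, but no database related yet.
    Try: juju add-relation myapp mysql (see the charm README).</body></html>
    EOF
    open-port 80

    # hooks/db-relation-changed would then overwrite the placeholder
    # with the real application's content once the db is related.
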
<jcastro> yeah
<jcastro> man you know what
<jcastro> a standard placeholder page, that could link back to the readme, etc.
<jcastro> for those cases where you just deploy from the store and can't be bothered to read anything
<jcastro> until it doesn't work ...
<marcoceppi> jcastro: not only that, what if the standard placeholder embedded the readme and rendered it if RST/MD?
<gary_poster> this all sounds great to me :-)
<jcastro> "Sorry, this service isn't up and running yet, you might have missed a relation, here's the readme to get you started"
<jcastro> gary_poster, I read that as "This sounds great, let me know when you guys are done and I want to be clear this has nothing to do with the GUI."
<jcastro> :p
<gary_poster> jcastro, lol
<gary_poster> the "I want to be clear..." part was not intended, but I'm fine with it. ;-)
<stub> marcoceppi: I'm looking at amulet to see if I can use it to replace chunks of my test suite. I need to test that my charm copes with the environment being modified (units added, removed etc.).
<stub> marcoceppi: The deployer seems to let me build and deploy an environment, but not modify it.
<marcoceppi> stub: yeah, I'm working on adding/removing units outside of deployer in the Deployment module for amulet, it's on track for the 1.1 release
<stub> marcoceppi: Do you think it will cope if I add or remove units outside of deployer? I'm after the improved waiting
<stub> oh, that would work for some tests (when I need to remove stuff), but not for the ones where I need to add units.
<marcoceppi> stub: it should* since it scrapes juju status for actual information
<marcoceppi> unit modification will work, since all the relations are already modeled via the proxy relation and subordinates
<marcoceppi> just adding services won't work since Deployment needs to re-create a new subordinate, etc
<stub> I'm also loving doing fast runs by reusing machines. I hope I can keep that with juju deployer (maybe just a choice between terminate services and terminate machines in the tear down?)
<marcoceppi> stub: I've not tested removing stuff outside of deployer, it should theoretically work
<marcoceppi> stub: I don't expose any juju-deployer command line options yet, but certainly could
<marcoceppi> stub: that being said, feedback for changes is more than welcome; let me know what you need to test successfully and I'll be happy to implement (within reason)
<stub> marcoceppi: https://code.launchpad.net/~stub/charm-helpers/test-harness/+merge/181865 is what I'd like to replace with amulet. I was originally thinking of it going into charm-helpers, but now I think it is better in Amulet.
 * marcoceppi reads
<stub> huh... thought I'd already flagged it as wip, not needs review
<stub> (I don't care about the API, just being able to construct and destruct a deployment piece by piece and the sentries for real waiting)
<marcoceppi> stub: right, so you can construct and wait currently, deconstructing will have to happen outside of deployer, and I'd rather use the API for that to avoid calls to subprocess as much as possible
<marcoceppi> Not sure when, but hopefully around the sprint if not shortly after I should have 1.1 near ready for release
<stub> ok, I wasn't sure if juju-deployer supported modifying existing deployments, or just bootstrapping new ones.
<stub> No hurry for me - I have what I need at the moment. Do you think I'm doing the right thing keeping the code just in my charm, rather than landing it in charm-helpers or somewhere?
<marcoceppi> stub: that's fine, but you're more than welcome to push to charmhelpers. Amulet isn't designed to be the only testing solution out there
<stub> I don't want us to end up supporting two similar code bases, one of which will likely just be a confusing wrapper around Amulet :)
<stub> I could add the fixture into Amulet instead when it has the functionality it needs.
<marcoceppi> stub: I'd be happy to accept that
<adam_g> getting an error bootstrapping b/c no tools are available for 1.16.. are these being published soon?
<ahasenack> adam_g: I asked in the mailing list, I don't know why 1.16 was pushed to the *stable* ppa before tools were available
<ahasenack> same error here, btw
<adam_g> ahasenack, yea. ugh
<ahasenack> adam_g: weird, it says "In addition, no tools could be located to upload."
<ahasenack> adam_g: but juju bootstrap --upload-tools seems to find some
<ahasenack> maybe it found them somewhere in my trunk checkout
<adam_g> ahasenack, i just reverted back to 1.15.1
<ahasenack> adam_g: oh, you had it
<ahasenack> I had 1.14
<ahasenack> which is gone from the ppa
<adam_g> ahasenack, i just found an old one lying around in /var/cache/apt/archives :)
<ahasenack> hehe
<ahasenack> I have this bad habit of running apt-get clean after an upgrade
<jcastro> marcoceppi, hey did you try null/manual provisioning yet?
<marcoceppi> jcastro: not yet
<marcoceppi> sinzui: does 1.16.0 fix build issues with brew?
<marcoceppi> also \o/ 1.16.0 is out
<sinzui> marcoceppi, yes it does.
<sinzui> I still have yet to verify it on my own mac before making a pull request
<sinzui> ^ marcoceppi
<marcoceppi> sinzui: ah, cool. I know they (brew ppl) were asking me about the 1.14.1 build hiccup
<sinzui> marcoceppi, The error was bad logic in the juju-core build, When we added support for Windows, we lost support for all *nix. We put that back
<marcoceppi> sinzui: cool
<sinzui> marcoceppi, arosales_ I just confirmed that home brew loves juju 1.16.0. It builds for me. I will make a pull request
<marcoceppi> sinzui: awesome! Thanks
<arosales> sinzui, good to hear
<arosales> sinzui, good to hear. Thanks for submitting it.
<sinzui> arosales, marcoceppi I see the pull is marked as good https://github.com/mxcl/homebrew/pull/23186
<zradmin> anyone have a gomaasapi error with 1.16? the juju deployer is telling me it can't find addresses for instances in maas and cannot provision new machines for some reason
<davecheney> :(
<davecheney> zradmin: can you give some more output
<davecheney> maybe there is magic that can be applied
<zradmin> davecheney: sure what do you need? currently its just giving me a generic 409
#juju 2013-10-11
<davecheney> bummer
<davecheney> the maas side may be logging something more verbose
<davecheney> maas to the client is very terse
<zradmin> im back now, we had a bit of an isp issue
<zradmin> in the logs its giving me a 401 (unauthorized) error now, it did let me deploy a few services before starting this behavior though. API keys are correct for maas
<davecheney> zradmin: is that the juju log or the maas log
<davecheney> from the little i know about maas
<zradmin> davecheney: juju log
<davecheney> the client facing logs are pretty terse
<davecheney> look inthe maas log
<davecheney> that is where the juicy details are
<zradmin> davecheney: just getting a more verbose version of the unauthorized message http://pastebin.ubuntu.com/6220330/
<davecheney> bloody hell python
<davecheney> what is an error and what is an exception
<davecheney> look, i dunno if I can help you
<zradmin> i know right?
<davecheney> i don't have much maas experience and maas has clearly decided that you don't own that resource
<davecheney> and it won't be dissuaded from that
<zradmin> lame, i can't even destroy the juju environment
<zradmin> ok, i removed the last node i registered with maas and now it processed my destroy-environment command
<zradmin> very strange
<davecheney> zradmin: i'm sorry to handball you, but you might get better advice in #maas
<zradmin> davecheney: no problem I came here first because the juju update was the only thing that changed in the environment. Its working again now for some reason though so I'm fine at the moment, thanks for your assistance
<davecheney> kk
<julianwa> hello, how can I know which hooks will be called when I restart juju unit agent?
<davecheney> julianwa: it depends on what the state of the unit agent was when you restart it
<davecheney> in general, no hooks will be called
<julianwa> davecheney: hmm.  Is there a document that says which hooks get called in which state?
<davecheney> julianwa: no
<davecheney> but that wasn't really the question you first asked
<davecheney> can I ask, what is the problem you want to solve ?
<julianwa> davecheney:  is this described in the charm?
<davecheney> nope
<davecheney> it's sort of in the docs
<davecheney> but, you haven't answered my question
<davecheney> the reason why i'm pushing back is the unit agent does not run the service
<davecheney> if you stop the unit agent
<davecheney> mysql doesn't stop, for example
<davecheney> juju isn't a process manager
<davecheney> the only way restarting would/wouldn't help would be if the unit agent was restarted *during* hook execution
<jujulearner> hi guys
<jujulearner> i need some help in bootstrapping a EC2 environment, can anyone spare a few minutes to help? Thanks!
<Anju> hii anybody around?
<Anju> where can I set the path for juju logs?
<Anju> anybody around?
<Anju> ???????
<freeflying> Anju, what do you mean set path?
<Anju> freeflying:  are u there
<Anju> sorry I was away
<freeflying> Anju, what is your question about
<Anju> freeflying:  i want to know, if i want to use juju with openstack
<Anju> then where can i see the logs
<freeflying> Anju, after you bootstrap, you can use juju debug-log to check the logs
<Anju> yes I read this somewhere
<freeflying> Anju, and for sure, juju supports openstack, highly recommend to use :)
<Anju> but aren't all the logs saved in a file?
<Anju> freeflying:  openstack has many components
<Anju> if i want to see particular logs for a particular component
<freeflying> Anju, you mean logs from openstack itself or the logs from the vm you start
<Anju> I want logs of juju
<freeflying> Anju,  juju debug-log collects logs from the nodes it deployed
<freeflying> Anju, and those logs are also stored on the nodes juju deployed
<Anju> stored on the nodes juju deployed ???
<Anju> can you please tell me more
<Anju> ?
<freeflying> after you deploy any service, you can juju ssh <service>/0 (the unit), and check the log under /var/log/juju/
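
Putting freeflying's answer together (service name illustrative):

    # from the client: follow the aggregated logs
    juju debug-log
    # or inspect one unit's log on the machine itself
    juju ssh mysql/0
    less /var/log/juju/unit-mysql-0.log
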
<Anju> ohhkkk
<Anju> freeflying:  thanku so much
<freeflying> Anju, np
<Anju> freeflying:  did you use juju?
<freeflying> Anju, sure
<Anju> ohkk
<jcastro> evilnickveitch, ok I've asked alanbell to check out the manual provisioning docs
<jcastro> and see if we can get some feedback
<evilnickveitch> jcastro, cool
<jcastro> I'm going to ask some other people to try it too
<evilnickveitch> I am working on the other configs
<evilnickveitch> yes! that would be great
<jcastro> jamespage, would today be a good day to ask about the openstack bundle or are you guys in release crunch still?
<jamespage> bit crunchy still tbh
<jamespage> but I can spin some time in-between helping zul push the second set of RC's from upstream
<jamespage> jcastro, for the bundle what do you want?
<jcastro> like so, are we going to publish it at a certain place or ... ?
<jamespage> jcastro, it might be better after we actually land the changes for havana :-)
<jamespage> those are still to be done
<jamespage> well the changes are done; they are just not in the charm-store yet
<jcastro> ok
<jcastro> so in other words, I should wait
<AlanBell> jcastro: did you?
<jcastro> AlanBell, see your G+
<jcastro> but here's the URL
<jcastro> https://juju.ubuntu.com/docs/config-manual.html
<AlanBell> oh cool :)
<nesusvet> I tried to deploy the node, but after deployment it returns: 2013-10-11 07:15:54 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
<nesusvet> hello everyone
<nesusvet> I guess there's a problem with a different version of mongodb
<nesusvet> and I guess I should add the stable juju repository to /etc/apt/sources.list before installation, but I don't know how
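
The usual route is a PPA rather than editing sources.list by hand:

    sudo apt-add-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install juju-core
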
<ahasenack> hi guys, is that reported size the size of the charm? http://pastebin.ubuntu.com/6223679/
<ahasenack> and, it copied it to my machine and then uploaded it to the bootstrap node?
<ahasenack> hello, anyone here?
 * sarnold waves to ahasenack :)
<ahasenack> :)
<sarnold> .. I've just got no idea. sorry. :)
<ahasenack> sarnold: got it from the juju-gui guys, they are bundling the juju-gui tarball inside the charm now
<ahasenack> sarnold: that's a 45Mb download from the charm store to my computer, and then a 45Mb upload from my computer to the bootstrap node
<kurt_> jcastro: ping
<Tim> Quick question: Is there a good place to follow development information and news? I'm trying to deploy juju on a very large cloud and I'm having a hard time finding current information on changes
<Tim> like juju tools on s3
<jcastro> kurt_, pong!
<jcastro> Tim, there are two mailing lists
<jcastro> juju and juju-dev
<jcastro> You can sub to them from here: https://lists.ubuntu.com/
<Tim> Ah, thanks. Keeping up with what works (and a lot that doesn't) has been a real headache
<Tim> jcastro: Like the fact that the tools on s3 are currently a blank file ;)
<ahasenack> Tim: really? what juju-core version do you have?
<ahasenack> just curious if it's the recent 1.16 build
<Tim> It is
<Tim> 1.16.0-precise-amd64
<ahasenack> Tim: and do you have public-bucket-url or tools-url defined in environments.yaml? Do you have a pastebin of juju bootstrap -v?
<Tim> ahasenack: When bootstrapping I get 'WARNING no tools available, attempting to retrieve from https://juju-dist.s3.amazonaws.com/ ERROR Get https://juju-dist.s3.amazonaws.com/tools/releases/juju-1.16.0-quantal-i386.tgz: EOF'
<Tim> I'll run it verbosely
<jcastro> sinzui, ^^^
<ahasenack> hm, the quantal url seems to be https://juju-dist.s3.amazonaws.com/tools/juju-1.16.0-quantal-amd64.tgz
<ahasenack> I don't know about that "/releases/" path component
<Tim> ahasenack: https://gist.github.com/timfallmk/6940517
<Tim> The s3 tools download is new to me since I last tried juju
<ahasenack> Tim: it's a fallback download. What is your environment?
<ahasenack> Tim: I mean, is it aws, openstack, hp cloud, lxc, ...?
<Tim> One in OpenStack, one in MAAS
<Tim> and one in LXC :)
<Tim> The one I'm trying now happens to be in MAAS
<ahasenack> so looks like this time it downloaded the quantal tarball, but failed on the raring one
<ahasenack> could it be a temporary error?
<Tim> ahasenack: Seems to be persistent
<Tim> Also, I thought I had the tools installed from the initial repo
<ahasenack> Tim: but above you pasted a line where it failed fetching the quantal tarball
<Tim> Hmm, you're right
<Tim> curious
<ahasenack> but yeah, tools are a pain, there is always something slightly wrong
<Tim> Weird, now it's giving me the failure for that
<Tim> raring I mean
<sinzui> Tim, I will look. Your url is correct for 1.16.0+ and simplestreams
<Tim> sinzui: I'm not sure why it would fail on quantal until I ran it with -v
<Tim> sinzui: Hello, I just ran it with -v again, and it moved on to failing on saucy
<Tim> so, to summarize, it fails persistently when run non-verbosely, but can complete only the *next* step when run with -v each time
<ahasenack> so weird
<sinzui> Tim: interesting. I always run with --show-log (formerly -v)
<Tim> ahasenack: Very weird
<ahasenack> maybe it is always failing on the next one, but only -v shows you that
<Tim> sinzui: If I run it repeatedly until it completes every one, then it moves on
<Tim> well it gives the failure line when run without -v
<Tim> and its always quantal
<ahasenack> ah
<Tim> ahasenack: or was
<kurt_> Hi Guys - anyone know when 1.16 may be ready?  I'm looking for the fix to bug 1236734
<_mup_> Bug #1236734: juju 1.15.1 polls maas API continually <juju-core:Fix Released by gz> <https://launchpad.net/bugs/1236734>
<ahasenack> kurt_: 1.16 is in the stable ppa, you can download it
<Tim> ahasenack: Aha, it completed. Now I get the good old "gomaasapi: got error back from server: 409 CONFLICT"
<Tim> thanks guys. Weird error
<ahasenack> that's where my knowledge of maas ends
<Tim> ahasenack: It's a maas node issue
<Tim> :)
<Tim> Thanks all!
<sinzui> Tim have you set your tools-url:? For AWS it is https://juju-dist.s3.amazonaws.com/tools . This may not help since fallback is obviously looking in the correct place in the end
<kurt_> ahasenack: does it include the fix for 1236734?
<kurt_> Tim: do you see that error consistently?
<ahasenack> kurt_: I don't know
<kurt_> ahasenack: ok, just looked it up.  Apparently it does… https://launchpad.net/juju-core/+milestone/1.16.0
<jcastro> AlanBell, hey so https://bugs.launchpad.net/juju-core/+bug/1238934
<_mup_> Bug #1238934: manual provider doesn't install mongo from the cloud archive <manual-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1238934>
<jcastro> I have failed you
<jcastro> but I will try again!
<kurt_> sinzui: ping
<AlanBell> jcastro: aww :)
<AlanBell> I will give it a go at some point over the weekend I think
<AlanBell> mongodb with juju could well be interesting to me as it happens, http://exceptionalemails.com/ has a mongodb back end (plus some python and PHP bits)
<AlanBell> but that was designed to scale out (not that it is likely to ever *need* to scale out)
<Nik_> Hi all... I wanted to clarify a couple of things regarding hooks and the sequence of events... Can someone help with that? For example, I am not sure when config-changed is invoked
<Nik_> Does it get invoked when the service is initially started or installed, and does it get invoked when someone supplies --config to the deploy command?
<marcoceppi> Nik_: when a charm is deployed, the following hooks will run: install, config-changed, start
<marcoceppi> Nik_: everytime `juju set` is run, config-changed hook will execute
<marcoceppi> Nik_: so it'll always run at least once
<Nik_> Cool good to know not to invoke it explicitly.
<Nik_> Now does start get invoked if a relationship is optional: true?
<Nik_> err
<Nik_> If a relationship is optional: false (hence required), does start get invoked before the relationship joins?
<Nik_> And yes, what's the meaning of optional: false in the requires: section
<marcoceppi> Nik_: at this time all relations are optional, and setting them to false does nothing
<marcoceppi> Nik_: that whole system isn't implemented yet, iirc
<Nik_> Oh I see....
<Nik_> And what about relaton-joined vs relation-changed.
<Nik_> It said that changed gets invoked if the relation rejoins
<Nik_> And I'm not clear on the part "rejoins" what does it mean? If a relationship departs after remove relation, isn't rejoining considered relation-joined?
<marcoceppi> Nik_: that's not quite true. Relation-joined happens once per relation per unit, it's like the "install" hook of the relation sequence, relation-changed happens whenever there is new data on the relation wire, so the relations send stuff with relation-set and that triggers changed
<marcoceppi> Nik_: yes, if you remove a relation, then re-create it, joined is run again
<Nik_> Oh and new data on the wire means stuff like relation-set?
<Nik_> But if relation-set is invoked multiple times
<marcoceppi> Nik_: then changed will execute each time
<Nik_> rel-changed will also get invoked multiple times?
<marcoceppi> Nik_: yes, all hooks need to be idempotent
<marcoceppi> any hook could be executed multiple times
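
A skeletal idempotent relation-changed hook, as a sketch (the "host" key and config path are illustrative):

    #!/bin/bash
    set -e
    host=$(relation-get host)
    if [ -z "$host" ]; then
        # the remote unit hasn't relation-set yet; a later -changed will fire
        exit 0
    fi
    # regenerate config from relation data; overwriting is harmless on repeat runs
    echo "host: $host" > /etc/myapp/db.conf
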
<Nik_> Ok so then the client can simply keep restarting multiple times, if needed until it gets all the config values from the service
<Nik_> Or, I guess a server can set some flag on the relation like done=yes to cause the client to read all the config information
<sarnold> ahasenack: _wow_, 45M up and down, that's a significant deterrent to using juju-gui over e.g. a DSL line ..
<ahasenack> yeah, that sucked a bit
<ahasenack> as in, I can't deploy it from here where I happen to be now
<sarnold> ahasenack: though I've seen enough complaints about charms needing access outside of a private cloud to deploy, and the trouble or impossibility of punching holes..
<sarnold> ahasenack: heh, no more coffeeshop demos :)
<ahasenack> sarnold: or outside of a big city in Brazil :)
<sarnold> ahasenack: heh I thought even the small cities were big :) hehe
<ahasenack> well, 200k inhabitants, but good adsl service only in the downtown area. 3km out and you are out of luck
<Nik_> And I had one last question (for anyone who can answer)... When a server charm provides: my-service: interface: http, for example, just so that clients can obtain its public address when they join the relation, is there a way for the server to specify that it is not interested in my-service-relation-joined, so that juju doesn't spit out errors in the log that the hook does not exist?
<Nik_> I believe that juju executes the my-service-relation-joined hook on the server and complains
<sarnold> ahasenack: wow, I wouldn't have expected 3km to be the difference between good vs bad internet access..
<ahasenack> yeah, sucks, here I get 1.5Mb/0.4Mb on a good day
<ahasenack> downtown 15Mb/1Mb and up
<sarnold> wow
<AskUbuntu> MAAS Set password for node | http://askubuntu.com/q/356936
#juju 2013-10-12
<AskUbuntu> Some questions about juju and maas | http://askubuntu.com/q/357309
<AskUbuntu> Can juju detect down service? | http://askubuntu.com/q/357328
#juju 2013-10-13
<lamont> I am failing to recall how I got juju-1.14.1-precise-amd64.tgz into my juju-dist setup in my private cloud, and now have run into the fun that is juju 1.16.0 and no tools tarball.  hints anyone?
 * lamont finds something at least usable, if it's not just doing it all by itself
<marcoceppi> lamont: you still having juju/private cloud problems?
<lamont> marcoceppi: not sure how much of it is me just being slow.
<lamont> what format does the --source for juju sync-tools need to be in?
<lamont> trying to get 1.16 to serve bits from the local juju-dist swift container
<lamont> but actually heading afk for the rest of the night, before I get yelled at too much
<marcoceppi> lamont: you shouldn't need to specify --source, sync-tools should just work(tm)
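
For reference, a hedged sketch of the sync-tools usage under discussion; per the failing URL earlier, --source would expect a directory laid out like the public mirror:

    # mirror released tools into the environment's private storage
    juju sync-tools -e my-private-cloud
    # or pull from a local mirror containing tools/releases/juju-<ver>-<series>-<arch>.tgz
    juju sync-tools -e my-private-cloud --source /path/to/juju-dist
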
<marcoceppi> lamont: have a good night o/
<lamont> marcoceppi: cool
#juju 2014-10-06
<katco> anastasia: welcome!
<lazypower|Travel> o/ Welcome anastasia
<marcoceppi> \o anastasiamac
<gnuoy> jamespage, https://code.launchpad.net/~james-page/charms/trusty/hacluster/mix-fixes/+merge/235675
<gnuoy> I've added some comments
<jamespage> gnuoy, I've updated the readme but would like to defer any other refactoring for now
<jamespage> gnuoy, next cycle
<jamespage> :-0)
<gnuoy> sure, understood
<gnuoy> jamespage, Two tiny fixes need for https://code.launchpad.net/~james-page/charms/trusty/keystone/https-multi-network/+merge/236905
<mgz> gnuoy: do you and jamespage have a psychic link for the review comments?
<gnuoy> mgz, how do you mean ?
<mgz> gnuoy: ...you actually used inline comments, featurey
<gnuoy> It's a splendid feature.
<jamespage> gnuoy, keystone tidied
<jamespage> gnuoy, we have a break with latest juno glance and the charms
<jamespage> seems we need to tweak configuration a bit
<gnuoy> jamespage, ok, want me to take a look ?
<jamespage> gnuoy, yes that would be helpful
<jamespage> gnuoy, looks like alternative stores such as rbd and swift are disabled by default and have to be switched on in glance-api.conf
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/openstack-dashboard/juno-support/+merge/237269
<jamespage> if you would be so kind :-~)
<gnuoy> sure
<josepht> is there a way to make a relation hook wait for another to finish before continuing?
<gnuoy> jamespage, enable_security_group has gone, is that deliberate ?
<jamespage> gnuoy, its gone upstream
<gnuoy> jamespage, ditto COMPRESS_OFFLINE ?
<jamespage> gnuoy, gone as well
<jamespage> gnuoy, the configuration option will no-op >= juno
<stokachu> josepht, i dont believe so
<jamespage> gnuoy, we compress online always now
<gnuoy> jamespage, approved
<jamespage> gnuoy, thanks for the review
<gnuoy> np
<jamespage> gnuoy, so what do I need to review from you still? cells and vxlan right?
<jamespage> or did I already land vxlan?
 * jamespage <- weekend brain
<gnuoy> jamespage, I think you said you'd done vxlan
<gnuoy> jamespage, and cells is not ready for rereview yet
<jamespage> ack
<jamespage> so you're not waiting on me right now
<jamespage> ok
<jamespage> gnuoy, if you are still working on cells, I can look at glance now
<jamespage> have all distro work clear for now
<gnuoy> jamespage, nope, I'll revisit cells tomorrow am
<gnuoy> :q
<bodie_> where does juju stand wrt docker and lxc right now, or where can I find this info?
<edmar_> hi guys, I started today with juju and I have my first problem: I created a machine with precise in a local environment and destroyed it. But now, when I try to create a new machine with precise, the service doesn't start.  Has anyone had this problem? thanks
<edmar_> If I destroy my environment and create it again, it works
<sebas5384> Just sharing with you guys: Get started with Juju and Ansible https://github.com/TallerWebSolutions/demo-ansible-and-juju
<sebas5384> We are having some problems with pending machines in juju-local
<sebas5384> is someone else having this issue?
<stokachu> sebas5384, not today
<sebas5384> stokachu: we already have 3 reports here where the machines just stay in pending
<stokachu> sebas5384, anything in the unit logs?
<sebas5384> from different people
<stokachu> what version of juju as well
<sebas5384> the unit isn't even creating the log file
<sebas5384> stokachu: Let me get that info
<sebas5384> stokachu: 1.20.9-trusty-amd64
<stokachu> sebas5384, ok hmm, im still on 1.20.7
<sebas5384> hmm maybe something in the new version broke something
<stokachu> sebas5384, its possible, as a test could you downgrade back to 1.20.7?
<stokachu> that at least works for me
<sebas5384> hmmm stokachu I can test that :)
<stokachu> sebas5384, yea just to rule out if its a version change
<sebas5384> stokachu: sure!
<sebas5384> stokachu: we just destroy the environment
<sebas5384> and then we tried to deploy a new charm
<sebas5384> and then it worked
<sebas5384> stokachu: do you have any idea how to downgrade the version?
<stokachu> sebas5384, probably just use apt-get to install an older version
<sebas5384> ahh ok
<sebas5384> thanks :)
<sebas5384> stokachu: yep! with the older version it's all working ¬¬
<edmar_> sebas5384: Do you try with 1.20.9 too?
<sebas5384> edmar_: yep
<sebas5384> and it's having that problem, but only works after destroying the environment
<edmar_> Is it possible to stop the machine and then start it again?
<stokachu> sebas5384, good to know, probably file a bug with your findings
<stokachu> sebas5384, we've tested 1.20.8 and everything seems to work
<stokachu> so you could narrow it more to a problem between 1.20.8 and 1.20.9
<stokachu> but this seems like a regression to me
#juju 2014-10-07
<ayr-ton> Has anyone here experienced using juju with puppet?
<jamespage> gnuoy, just syncing up with neutron-openvswitch - https://code.launchpad.net/~james-page/charms/trusty/neutron-api/hyperv-support/+merge/237370
<gnuoy> jamespage, approved
<gnuoy> jamespage, fix for glance on juno https://code.launchpad.net/~gnuoy/charms/trusty/glance/list-known-stores/+merge/237372
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/fixup-haproxy-maas/+merge/237376
<jamespage> fairly urgent - my landings yesterday broke on MAAS
<gnuoy> sure, looking
<gnuoy> jamespage, approved
<jamespage> gnuoy, not going to raise MP's for the re-sync
<gnuoy> sure
<jamespage> as I consider those trivials
<jamespage> gnuoy, feedback on glance charm fixes
<jamespage> gnuoy, needs to be more conditional on relations being present
<gnuoy> I did think about doing that but I didn't see the point tbh. Still, it's trivial either way
<gnuoy> jamespage, I've updated https://code.launchpad.net/~gnuoy/charms/trusty/glance/list-known-stores/+merge/237372
<jamespage> gnuoy, almost!
<gnuoy> aarrghhh
<jamespage> gnuoy, glance can support multiple backend stores with a default option now
<jamespage> gnuoy, so having the list is valid, but we need to populate it based on what's related to glance
<gnuoy> ack, will do
<jamespage> gnuoy, quicky for you - https://code.launchpad.net/~james-page/charm-helpers/worker-context/+merge/237409
<jamespage> just sweeping up on the worker scaling fixes we did in nova-cc -> neutron-api, glance, cinder and keystone
<gnuoy> jamespage, approved
<jamespage> gnuoy, and landing for that in:
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/nova-cloud-controller/worker-config/+merge/237417
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/cinder/worker-config/+merge/237413
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/neutron-api/worker-config/+merge/237416
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/keystone/worker-config/+merge/237415
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/glance/worker-config/+merge/237414
<jamespage> sorry ;-)
<jamespage> I've just finished testing that lot
<gnuoy> jamespage, I'm testing a utopic-juno deploy now, but assuming that works I think I've addressed your comments in http://pad.ubuntu.com/nova-cell-feedback
<jamespage> gnuoy, utopic-juno is bust
<jamespage> https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/1378359
<mup> Bug #1378359: mysql-server crash when syncing neutron database schema <amd64> <apport-bug> <ec2-images> <openstack> <osci> <utopic> <mysql-5.5 (Ubuntu):In Progress by james-page> <https://launchpad.net/bugs/1378359>
<jamespage> dealing with that atm
<gnuoy> jamespage, ok, cell charms are working with  trusty-icehouse so if you get a moment to have a look that'd be great
<jamespage> gnuoy, ack will do
<gnuoy> jamespage, 73rd time lucky https://code.launchpad.net/~gnuoy/charms/trusty/glance/list-known-stores/+merge/237372
<gnuoy> jamespage, we're defaulting the number of glance workers to be twice the machine cpu count but upstream seems to suggest it should match the cpu count. What's the thinking there ?
<jamespage> gnuoy, ok - look in a sec
<jamespage> gnuoy, we discussed worker count a while back, and based on the testing I did last cycle at scale, most servers can deal with 4x cpus as workers
<jamespage> however that seemed a little extreme so we defaulted to 2x
<gnuoy> kk
<gnuoy> jamespage, looking at http://docs.openstack.org/icehouse/config-reference/content/setting-flags-in-cinder-conf-file.html and http://docs.openstack.org/trunk/config-reference/content/section_cinder.conf.html
<gnuoy> osapi_volume_workers is not listed for icehouse
<jamespage> gnuoy, hmm
<gnuoy> This looks like a documentation bug tbh. Did you test on icehouse ?
<jamespage> gnuoy, it is a documentation bug - I tested on icehouse OK
<jamespage> gnuoy, its in the sample conf file in the source tree for icehouse
<gnuoy> kk
<jamespage> gnuoy, so close...
 * gnuoy weeps
<jamespage> gnuoy, sorry - feedback on MP
<jamespage> gnuoy, your context was a bit non-standard as well
<gnuoy> jamespage, got a sec for another look?
<jamespage> gnuoy, beautiful - unit test?
<jamespage> then +1
<gnuoy> tip top, ta
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/glance/list-known-stores/+merge/237372 <- now added unit tests
<lovea> it
<jamespage> gnuoy, +1 on that - please merge away!
<ayr-ton> Guys, I'm trying something like this: http://www.slideshare.net/lynxmanuk/juju-puppet-puppetconf-2011
<ayr-ton> So..
<ayr-ton> When deploying an app using puppet master, does it take care of relations?
<ayr-ton> I should use something like: juju deploy puppetmaster, juju deploy mysql, juju deploy wordpress, and then use puppet master to manage the relations.
<ayr-ton> Or should I do something different?
<ayr-ton> And also, would you guys install charms using the puppet DSL? (:
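For reference, the plain-juju sequence ayr-ton sketches would look like the following; whether the relations should instead be driven from the puppet master is exactly his open question, so this only shows the juju side:

    juju deploy puppetmaster
    juju deploy mysql
    juju deploy wordpress
    # relations are still a juju-level concept:
    juju add-relation wordpress mysql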
#juju 2014-10-08
<stokachu> when writing charm tests, are they run from the perspective of a deployed charm in a juju environment, i.e. run from the /var/lib/juju/charms/ path?
<stokachu> like when `make test` is run from the automated charm tester the charm is already in a deployed state?
<stokachu> or do i have to run through the actual deploy and relations commands first
<stokachu> im guessing this is what amulet does from what i can see in the juju docs
<stokachu> marcoceppi_, do i have the ability to query the api state server address and password from the jenv files during an automated test run?
<stokachu> anyone know if i have the ability to query the api state server address and password from the jenv files during an automated test run?
<perrito666> I dont think any of you know how to get a new thinpad kb in brussels rigt?
<perrito666> oops wrong channelsorry
<sebas5384> imagine a charm with my IDE all configured https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers
<sebas5384> where I can relate it to other charms in order to edit them
<sebas5384> :)
<sebas5384> this could be possible?
<stokachu> sebas5384, get pass that pending machine issue?
<stokachu> or were you able to file a bug?
<sebas5384> I didn't come back to that
<sebas5384> after you destroy the environment everything gets back to normal in the new version
<sebas5384> i will get more information with the team and file a bug :)
<sebas5384> stokachu: thanks for reminding me!
<stokachu> sebas5384, np :D msg me the bug when you get it created
<sebas5384> stokachu: sure :D
<stokachu> the upgrade of juju using an existing juju env shouldnt break
<jcastro> Does someone have time to review this? https://bugs.launchpad.net/charms/+bug/1272083
<mup> Bug #1272083: HPCC Charm initial check-in for review <Juju Charms Collection:New for xwang2713> <https://launchpad.net/bugs/1272083>
<stokachu> tvansteenburgh, my memory is a little foggy but was it you i was talking to about the charm testing a little while back?
<tvansteenburgh> stokachu: probably
<stokachu> tvansteenburgh, ok, i was wondering: during a charm test do i have access to the jenv file for getting the api state server and admin password?
<stokachu> or are those defined in an environment variable somewhere
<tvansteenburgh> stokachu: hm, i'm not sure
<tvansteenburgh> obviously it would work fine if you were running the test locally
<stokachu> yea, i noticed amulet will do some type of deployer setup for setting up the charm to be tested
<tvansteenburgh> and i *think* it would work on jenkins too
<tvansteenburgh> stokachu: yeah amulet basically makes a deployment file and hands it off to juju-deployer
<stokachu> tvansteenburgh, do i have the ability to run charm tests using jenkins without proposing it to charm store?
<tvansteenburgh> stokachu: no, but i could do it for you
<tvansteenburgh> stokachu: also, you can just run bundletester directly, as that's what jenkins does
<tvansteenburgh> stokachu: this may be helpful -> http://blog.juju.solutions/cloud/juju/2014/10/02/charm-testing.html
<stokachu> tvansteenburgh, is charmguardian installed as part of bundletester?
<tvansteenburgh> stokachu: actually it's the other way around
<tvansteenburgh> but you can install bundletester by itself too
<tvansteenburgh> bundletester can be pip installed
<tvansteenburgh> charmguardian, you need to clone source and run make
<stokachu> ok cool, so if bundletester passes locally then theoretically it'll work in jenkins
<tvansteenburgh> stokachu: yep
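A rough sketch of the local workflow tvansteenburgh describes, run from inside a charm directory; the -e flag for selecting the environment is an assumption, check `bundletester --help`:

    # inside a virtualenv, or add --user
    pip install bundletester
    cd ~/charms/trusty/mycharm
    # run the charm's tests against a bootstrapped environment
    bundletester -e local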
<stokachu> tvansteenburgh, ok ill use this locally and when ready maybe have you kick off a charm test for me to make sure i can query the api server?
<tvansteenburgh> stokachu: sounds great. just curious, are you using python-jujuclient for the api calls?
<stokachu> tvansteenburgh, we have a python3 version of python-jujuclient that we use
<stokachu> https://github.com/Ubuntu-Solutions-Engineering/macumba
<tvansteenburgh> stokachu: ah, right on
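stokachu's jenv question never gets a firm answer in-channel, so purely as an illustration: a Python sketch of pulling the state-server address and admin password out of a 1.20-era .jenv file. The state-servers and password key names are from memory and worth verifying against a real file:

    import os
    import yaml  # PyYAML

    def read_jenv(env_name):
        """Return (api_address, password) from a juju 1.x .jenv file."""
        path = os.path.expanduser('~/.juju/environments/%s.jenv' % env_name)
        with open(path) as f:
            data = yaml.safe_load(f)
        # key names assumed from the juju 1.x jenv format
        return data['state-servers'][0], data['password']

    print(read_jenv('local'))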
<tvansteenburgh> stokachu: are you in Brussels this week?
<stokachu> tvansteenburgh, nah unfortunately
<stokachu> tvansteenburgh, im in austin in nov though
<tvansteenburgh> ah okay, just curious
<tvansteenburgh> i'm in Brussels, and it's time for beer now :)
<stokachu> tvansteenburgh, haha enjoy
<stokachu> thanks for the help this gets me most of the way there
<tvansteenburgh> stokachu: will do, catch ya later
<tvansteenburgh> great, glad to help!
<stokachu> cya
<tvansteenburgh> cya
#juju 2014-10-09
<gnuoy> jamespage, I'm going to look at switching to using a context for populating the cells.json but given it's a file that contains a json dump I don't see much point in having a normal template. I was just going to go for a template which contains {{ cells_data }}. Are you likely to -1 that :)
<jamespage> gnuoy, no that's fine
<gnuoy> thanks
<jamespage> gnuoy, does https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/unicast-support/+merge/228658 need a rebase and fixup based on ivoks' feedback?
<gnuoy> jamespage, yes, it needs the fix you suggest but it needs lots of love since the hacluster charm rejig
<jamespage> gnuoy, ok marking WIP for now
<gnuoy> jamespage, kk
<jcastro> kwmonroe, https://bugs.launchpad.net/charms/?field.tag=audit
<jamespage> niedbalski, hey - what do you think about my sysctl comments on your ceph charm merge-proposals? I think we can make things better by default :-)
<rbasak> rick_h_, frankban: do you have a requirement to land the newer juju-quickstart in Utopic? Because none of the FFes have been approved.
<rbasak> (yet)
<rbasak> And release is coming up real soon now.
<rick_h_> rbasak: :( did we do something wrong on the FFE to get them approved?
<rick_h_> rbasak: I'm checking what was in it to see if there's anything that breaks, I don't think so but it's just stuff we wanted to get to users.
<rbasak> rick_h_: no, just that nobody from the release team has considered them.
<rick_h_> rbasak: ok, it's a bit of a sad face thing, but no I don't think there's a requirement
<jamespage> dosaboy, https://bugs.launchpad.net/charms/+source/quantum-gateway/+bug/1379324
<mup> Bug #1379324: Upgrade stable->next fails <openstack> <upgrade> <quantum-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1379324>
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/swift-storage/upgrade-fixes/+merge/237374
<jamespage> fixes the swift-storage upgrade
<dosaboy> jamespage: think i know what the issue is there
<jamespage> dosaboy, awesome
<jamespage> coreycb, some comments on https://code.launchpad.net/~corey.bryant/charms/trusty/cinder/remove-missing/+merge/237439
<jamespage> dosaboy, wanna give me a hint so I can fixup that upgrade issue?
<jamespage> gnuoy, dosaboy, coreycb: https://code.launchpad.net/~james-page/charms/trusty/swift-storage/upgrade-fixes/+merge/237374
<jamespage> needs a review - fixes upgrades from stable->next
<dosaboy> jamespage: fixing now
<dosaboy> v simple
<dosaboy> gimme min
<dosaboy> jamespage: https://code.launchpad.net/~hopem/charms/trusty/quantum-gateway/fix-bug-1379324/+merge/237790
<dosaboy> jamespage: had a quick scan, seems other charms are ok
<coreycb> jamespage, thanks, will change to bool and resubmit.  I used the string T/F to keep in sync with the overwrite option.
<jamespage> gnuoy, ah-ha - I have an upgrade bug with neutron-gateway + l2population
 * jamespage looks
<jamespage> gnuoy, dosaboy: https://code.launchpad.net/~james-page/charms/trusty/quantum-gateway/l2pop-upgrade-break/+merge/237792
<jamespage> we must default to off, otherwise upgrades in existing deployments will not work as the l2population driver is never enabled in the nova-cc charm
<dosaboy> jamespage: lgtm
<dosaboy> jamespage: added new 'backport-potential' tag to charms' official tags
<jamespage> dosaboy, awesome
<coreycb> jamespage, if that l2pop changes the config file it's gonna break the amulet test
<roadmr> hello folks! We're trying to get nested lxc containers working with juju (so juju deploy to a local lxc-backed environment, and my charm also starts an lxc container). But it's not working. We were told we need a couple of settings in the host lxc (i.e. the one that juju creates), but we don't know where to set this. Has anyone gotten nested lxc working while using juju local environment?
<jamespage> gnuoy, dosaboy, hey guys - I've seen a lot of good merges and reviews from coreycb in ~openstack-charmers - so I'm proposing him for full membership
<jamespage> gnuoy, dosaboy: +1 from either of you and I'll add him into the team
<gnuoy> jamespage, +1
<jamespage> gnuoy, ta - welcome coreycb!
<coreycb> w00t!
<coreycb> thanks guys :)
<jamespage> coreycb, normal rules, get review, test stuff, but you can land changes now as well!
<coreycb> jamespage, got it, sounds good
<ayr-ton> Does someone want to try my zabbix server charm, forked from marcoceppi_'s? https://manage.jujucharms.com/~ayrton/trusty/zabbix
<ayr-ton> I'm running into some problems connecting it to the mysql database. If someone figures out why, I would be very happy (:
<ayr-ton> The logs are in tail -f /var/log/zabbix-server/zabbix_server.log
<ayr-ton> Fixed. Now I'm getting error 500 from apache2, with no logs in /var/log/apache2 /o\
<niedbalski> jamespage, ping ^^ ,  re:sysctl , better defaults? which ones?
#juju 2014-10-10
<lazyPower|Sprint> stub: ping
<stub> lazyPower|Sprint: pong
<lazyPower|Sprint> Hey stub, i'm looking @ https://code.launchpad.net/~james-w/charms/precise/postgresql/metrics/+merge/231322
<lazyPower|Sprint> sorry incomplete thought and moving sessions, 1 moment
<stub> lazyPower|Sprint: Dammit... I thought my 2 months of PG charm patches were getting a review ;)
<lazyPower|Sprint> stub: they aren't in the queue that i see
<lazyPower|Sprint> stub: http://review.juju.solutions
<stub> lazyPower|Sprint: http://review.juju.solutions/review/1086, 6 days since last touch
<stub> (but this is the  rollup branch requested, containing MPs dating from over 2 months ago)
<lazyPower|Sprint> ooooo
<lazyPower|Sprint> tests
<lazyPower|Sprint> argh, rollup branches :(
<stub> Hey, it used to be in 7 nicely separated patches
<lazyPower|Sprint> wait, you mean this is 1 branch to review then?
<lazyPower|Sprint> <3 you just made my day, i thought i had to go back through the 7 pending branches
<stub> lazyPower|Sprint: Yeah, all those old branches got mushed together into this big MP on request
<lazyPower|Sprint> hrm, this is a rather large MP
<lazyPower|Sprint> i blame tvansteenburgh
<stub> Although I'm not sure why you guys like that - for the last 10 years everyone around here has been focused on ways of keeping MPs small and manageable ;)
<lazyPower|Sprint> well, seeing the sheer size of this, i agree w/ that sentiment
<stub> lazyPower|Sprint: I've got another 1500 lines of diff nearly ready to go ;)
<lazyPower|Sprint> i'm just used to the GH workflow. I'm a LP noob
<stub> I'd consider separating review and landing
<stub> Review is just 'does this code look good'. 'Is this code an improvement'.
<stub> And you want a small, self contained diff for that.
<stub> Landing is about lint, tests etc. and ideally handled by a bot.
<lazyPower|Sprint> we're getting closer to that
<lazyPower|Sprint> our CI is just about there for this, which means we're literally wrapping the step before introducing bot landing
<lazyPower|Sprint> ty for fixing the tests, i'm about to pull and run this
<stub> Yup. I'm looking forward to that. I'm hoping to get the tests stable enough too, although that might make them annoyingly slow due to 'juju wait' needing a longer pause.
<stub> The earlier reviews had picked up some genuine races, which I think I've fixed.
<stub> But this branch still works better than trunk atm :)
<stub> lazyPower|Sprint: I was planning on landing https://code.launchpad.net/~james-w/charms/precise/postgresql/metrics/+merge/231322 once I've got my queue cleared (getting the charm in shape to deploy Launchpad with it atm). I think there were things james wanted to fix on it, but I don't recall what now and it should be in a landable state as it is.
<stub> Maybe just make the minor changes, stick run-one in the crontab stuff.
<stub> IIRC james was going to make some changes, but got diverted to other things.
<stub> But I want the statsd stuff in, so sod waiting.
<lazyPower|Sprint> ack
<lazyPower|Sprint> I'll be looking into merging this massive change
<lazyPower|Sprint> a bit preoccupied with charm store triage atm, but will circle back to this before i EOD
<zezom> I'm currently running proxmox at home on a single server + NAS, but I'm quite impressed with what I've seen juju do. Unfortunately I am unable to tell if I can run MAAS and juju on the same single server. I would like the ability to add more servers in the future, but for now one server is all I have.
<zezom> I've read through the MAAS install docs on ubuntu.com but they don't say if the MAAS controller install can run any virtuals on itself or if it's only used to bootstrap other nodes to run virtuals.
<lazyPower|Sprint> stub: looks like there's an interesting mismatch between this branch and trunk.
<stub> really? I thought I had them in sync
<lazyPower|Sprint> Text conflict in hooks/hooks.py, Text conflict in test.py - i'm getting issues between the tests and hooks
<lazyPower|Sprint> possibly they were at one point, i haven't looked to see if anything has been merged recently,  i just pulled the source and started prepping my env to dive into this giant
<stub> lazyPower|Sprint: There was one revision unpushed... pushed it now.
<stub> lazyPower|Sprint: yeah, it was 'merge trunk'. Sorry about that.
<lazyPower|Sprint> all good :)
<stub> Launchpad is supposed to regenerate the diff and highlight this :-(
<stub> That would be the lint fixes I landed for Adam, that were also lurking in this integration branch.
<lazyPower|Sprint> stub: i've seen a few branches so far that haven't highlighted merge issues
<lazyPower|Sprint> i think that *only* works if its got an issue when it's first pushed
<lazyPower|Sprint> it doesn't continue to track the behavior of the branch if its diverged from trunk
<stub> I can't really recall now.
<stub> I normally remember to push so it isn't an issue ;)
<lazyPower|Sprint> stub: i'm making it an even shorter distance than my colleagues have
<lazyPower|Sprint> http://paste.ubuntu.com/8532283/
<stub> lazyPower|Sprint: Do you have a JUJU_REPOSITORY set, with trusty/postgresql pointing to the right location? And you will need postgresql-psql too if you are testing with trusty
<stub> Not that running tests is reviewing ;) That is something you do before landing.
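For anyone following along, the local-repository layout stub is asking about looks roughly like this under juju 1.x (paths are illustrative):

    export JUJU_REPOSITORY=~/charms
    # the repository is organised by series:
    #   ~/charms/trusty/postgresql
    #   ~/charms/trusty/postgresql-psql
    juju deploy local:trusty/postgresql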
<lazyPower|Sprint> stub: we should add these to the dependency targets for making the tests. CI would fail miserably on this
<lazyPower|Sprint> it expects whatever bundletester executes to do everything it needs to successfully run the tests. I see what you've got in here pre-dates amulet
<stub> Yes, I need to switch to amulet. I think it supports most of what I need now, and I can probably add the rest.
<lazyPower|Sprint> oh i wasnt saying re-write in amulet, i just noticed you built a fixture harness
<lazyPower|Sprint> which is kind of neat tbh
<stub> No, rewriting in amulet is a goal I have. I will then be able to put my fixture for driving amulet into the amulet package.
<lazyPower|Sprint> stub: is postgresql-psql a pip package? I'm not seeing anything in apt to satisfy the dependency
<stub> lazyPower|Sprint: it is cs:precise/postgresql-psql
<lazyPower|Sprint> oh! charm dependency
<lazyPower|Sprint> ack
<stub> lazyPower|Sprint: I've avoided getting it pushed to trusty, since I'd rather embed it into the PostgreSQL charm for testing like amulet does with its sentinels.
<stub> Currently, it pretends to be a general purpose cli, but it's really mainly used by my test suite, and ends up sucking for both purposes.
 * lazyPower|Sprint grins
<lazyPower|Sprint> I've got a few projects along that vein
<lazyPower|Sprint> yeah, i'm not getting any luck running the test suite
<lazyPower|Sprint> the code review portion looks fine, i dont see anything detrimental in this codeblock but i'm hesitant to rubberstamp the merge without the warm fuzzy feeling of getting test feedback
<stub> So we can fix that, or I can promise to confirm they pass before landing, or I can disable most of the tests ;)
<lazyPower|Sprint> option 3 sounds like a wash
<lazyPower|Sprint> option 2 is 'ok'ish but sets a precedent
<lazyPower|Sprint> i'd rather go with option 1 if at all possible
<stub> Cause even if I disable the integration tests, I still have more tests than most charms ;)
<lazyPower|Sprint> and we <3 you for that stub
<lazyPower|Sprint> how do we go about fixing this? I'm not picking up where its failing, its like its not even attempting to make the deployment when i run make test (which is what bundletester is doing)
<lazyPower|Sprint> juju status shows i just have the bootstrap. my JUJU_REPOSITORY is exported, and i have the postgresql-psql charm in that local repo
<stub> I'm already landing almost all the work on this charm. Just it would be evil to review and land my own work :)
 * lazyPower|Sprint nods
<stub> You have the same error as before? juju deploy fails?
<lazyPower|Sprint> i want to make this process less painful for everyone involved, and i've got the time to do this. i don't have any other blocks today that i'm implicitly scheduled to be at
<lazyPower|Sprint> yeh
<lazyPower|Sprint> let me capture that output for you
<lazyPower|Sprint> http://paste.ubuntu.com/8532440/
<lazyPower|Sprint> there's a snippet of the error output
<stub> CalledProcessError: Command '['juju', 'deploy', '--config=/tmp/tmpyCZBRK/config.yaml', 'local:trusty/postgresql', 'postgresql', '-n', '1']' returned non-zero exit status 1
<lazyPower|Sprint> wait
<lazyPower|Sprint> i see it
<lazyPower|Sprint> its looking for trusty/postgresql
<lazyPower|Sprint> i have precise branched, and placed locally
<stub> Right
<lazyPower|Sprint> cheese and rice
<lazyPower|Sprint> so this should target the trusty charm, as well as the MP then
 * lazyPower|Sprint facepalms
<lazyPower|Sprint> what am i doing with my life
<stub> So lp:charms/precise/postgresql and lp:charms/trusty/postgresql are identical
<lazyPower|Sprint> opportunity to make the tests more intuitive, as precise tests will *always* fail in this context
<lazyPower|Sprint> but thats a matter for another day
<stub> make test SERIES=precise will override the default series in your environment
<stub> Otherwise, it will test the default series in your environment
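As a sketch of the pattern stub describes - a generic Makefile fragment, not the actual postgresql charm Makefile, and it assumes the test script reads SERIES from the environment:

    # `make test` uses the default; `make test SERIES=precise` overrides it
    SERIES ?= trusty

    test:
    	SERIES=$(SERIES) ./test.py  # recipe lines must start with a tab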
<lazyPower|Sprint> hmm
<lazyPower|Sprint> i should make that a follow up item to investigate making test targets easier to work with, and sniffable from the path or something.
<lazyPower|Sprint> that way if set use env, otherwise look @ path and use that to determine the series
<stub> Yeah, it isn't pretty how I've got it
<lazyPower|Sprint> i dont think this is just your problem. we have the same issue in amulet
<stub> Maybe that will come out in the wash if I can get rid of this JUJU_REPOSITORY setup requirement?
<lazyPower|Sprint> you either define series=precise|trusty when you declare amulet.Deployment()
<lazyPower|Sprint> well, thats gone in amulet as well
<lazyPower|Sprint> but it doesn't support local: links, you pass it the name of the charm, and all others are defined with cs: url's
<lazyPower|Sprint> so you can't test 2 modified targets at once, which is a limitation (or feature, depending on how you look at it) of the amulet workflow
<stub> The automatic test environment could just set some environment variables that amulet or other test suites sniff. No need to overengineer for this case.
<lazyPower|Sprint> so for example, if these tests were amulet, it would just be d.add('postgresql') and thats enough for amulet to know to deploy the local copy of the charm.
<lazyPower|Sprint> it uses the name of the charm from metadata.yaml and matches on that
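A minimal amulet sketch of what lazyPower describes, using amulet's Deployment API; the relation endpoints below are assumptions for illustration:

    import amulet

    # the charm in the current working directory is picked up by the
    # name declared in its metadata.yaml, so 'postgresql' deploys the
    # local working copy rather than the store charm
    d = amulet.Deployment(series='trusty')
    d.add('postgresql')
    d.add('postgresql-psql')
    d.relate('postgresql:db', 'postgresql-psql:db')  # endpoints assumed
    d.setup(timeout=900)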
<lazyPower|Sprint> however, it looks more promising this time around now that i'm using the proper series
<lazyPower|Sprint> i think this bit cory_fu as well looking over the feedback chain
<stub> This is why we make bots run the tests ;)
<stub> Yeah
<stub> Cause who the hell still uses precise?
<stub> (I don't! I should get the default branch switched to trusty)
<stub> So generally, I'll see on average one test fail that works just fine when it is rerun alone, which I chalk up to the deployed services not being settled properly before the actual test is made
<stub> Although I've improved that an awful lot on this branch (and slower now too T_T)
<lazyPower|Sprint> well its still not deploying the tests
<lazyPower|Sprint> same feedback, regardless of SERIES= during make test, or moving everything to trusty
<stub> What does it tell you if you manually run the 'juju deploy' command, sans --config=/tmp/xxx
<lazyPower|Sprint> i think the core issue here is the tests arent architected to be satisfied when run via bundletester. (however i'm getting the same result with make test)
<lazyPower|Sprint> ooo well thats interesting
<lazyPower|Sprint> Added charm "local:precise/postgresql-10" to the environment. ERROR cannot add service "postgresql": environment is no longer alive
<stub> Has anyone packaged bundletester yet?
<lazyPower|Sprint> indeed!
<lazyPower|Sprint> its pip installable
<lazyPower|Sprint> pip install bundletester
<lazyPower|Sprint> looking into wth is going on with my local environment
<stub> Gah... pooping all over FS without apt
<stub> I'll put together a recipe build if I get bored ;)
<lazyPower|Sprint> stub: source ~/.venv/bin/activate
<lazyPower|Sprint> pip install bundletester
<lazyPower|Sprint> no longer polluting anything, or you can even pass --user to install to ~/.local
<stub> Where is ~/.venv coming from? Last I used virtualenv it was more manual
<lazyPower|Sprint> you initialize it
<lazyPower|Sprint> virtualenv ~/.venv
<stub> oh, cool
<lazyPower|Sprint> yeah, i do that when i want to test python modules - as its more of a 'bundler' experience
<lazyPower|Sprint> and i'm the unpopular rails guy that got hired :)
<stub> I've been using lxc, but had issues with juju inside lxc creating lxc containers...
<lazyPower|Sprint> yikes. yeah
<lazyPower|Sprint> it gets a bit angry about that
<lazyPower|Sprint> have you looked @ our vagrant story? after a networking bugfix lands it'll be a nice alternative if you want isolated 'fresh cloud image' style testing
<stub> Not yet. I'm backlogged on a couple of projects :-(
<stub> So many things to do, so little enthusiasm ;)
<lazyPower|Sprint> presently there's a route added that makes everything appear as if its coming from the gateway address - which unfortunately makes the postgresql ACL setting fail fantastically
<lazyPower|Sprint> hurray for packet rewriting
<lazyPower|Sprint> \o/
<lazyPower|Sprint> i think aisrael actually pointed that one out a few weeks ago
<tvansteenburgh> lazyPower: amulet actually does support local: urls now
<lazyPower|Sprint> tvansteenburgh: wat
<lazyPower|Sprint> why aren't we shouting from mountain tops about the awesomeness now?
<tvansteenburgh> i added that recently-ish?
<tvansteenburgh> shouting to whom?
<stub>   Analyzing links from page https://pypi.python.org/simple/lazr.authentication/
<stub>     Skipping link https://launchpad.net/lazr.authentication (from https://pypi.python.org/simple/lazr.authentication/); unknown archive format: .authentication
<stub> :-P
<tvansteenburgh> odd
<tvansteenburgh> is that with pip install?
<stub> Yes, but hmm.... might be mixing venv python with non-venv pip
<lazyPower|Sprint> tvansteenburgh: in charmhelpers, the config object is read only yeah?
<tvansteenburgh> nope
<tvansteenburgh> you can save arbitrary data in it
<lazyPower|Sprint> i have evidence that it gets saved to the archive, but its not coming back out when referenced
<stub> nah, carping on about --allow-external (which doesn't seem to be a supported option)
<tvansteenburgh> stub: valid option, is pip out of date?
<stub> The config object is writable, and should be saved on successful exit if you are using @hook or the services framework.
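A small sketch of what stub describes, using the charmhelpers hookenv API; the key name stored here is invented for illustration, and this only makes sense when run inside a charm hook context:

    import sys
    from charmhelpers.core import hookenv

    hooks = hookenv.Hooks()

    @hooks.hook('config-changed')
    def config_changed():
        cfg = hookenv.config()
        # arbitrary extra data can be stashed alongside the real config;
        # per the discussion above, it is persisted on clean hook exit
        cfg['deployed-once'] = True

    if __name__ == '__main__':
        hooks.execute(sys.argv)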
<tvansteenburgh> stub: e.g. https://github.com/juju-solutions/bundletester#installation
<lazyPower|Sprint> well we know its getting saved, but it doesn't appear to be getting loaded
<stub> This is from 'sudo apt-get install python-virtualenv; virtualenv ~/.venv; source ~/.venv/bin/activate; ~/.venv/bin/pip install bundletester'
<stub> fresh trusty
<stub> well... old trusty, but fresh venv
<tvansteenburgh> stub: and you are passing the --allow- options ?
<rbasak> marcoceppi_: around? Any news on removing the sentries from amulet?
<tvansteenburgh> lazyPower: want extra eyes on the config stuff?
<tvansteenburgh> rbasak: yeah we're still planning to do it :)
<rbasak> tvansteenburgh: OK, thanks. In the meantime I think I'll drive what I need from the shell instead. Do you have any idea how I might wait for a hook to finish firing after I trigger one with add-relation or destroy-relation or something?
<rbasak> How will amulet do that?
<tvansteenburgh> rbasak: there are some helpers for stuff like that in amulet, you may be able to reuse
<rbasak> I thought they currently relied on the sentries?
<stub> So --allow-external lazr.authentication and --allow-unverified lazr.authentication required it seems, which is not good for me to do on this box :-(
<lazyPower|Sprint> tvansteenburgh: not just yet, gimme another 5, doing env inspection
<tvansteenburgh> stub: it's the only way, unless you can convince someone from lazr to publish on pypi
<stub> Heck, I might have rights to do that.
<tvansteenburgh> omg you would be my hero
<lazyPower|Sprint> tvansteenburgh: its not loading the .charm_persistent_config
<tvansteenburgh> rbasak: memory is a little rusty on how it's done exactly, i'd just take a look and see
<lazyPower|Sprint> tvansteenburgh: appears related to scope possibly?
<tvansteenburgh> scope of?
<lazyPower|Sprint> @cached
<lazyPower|Sprint> its saving this in $CHARM_DIR/CONFIG_FILE_NAME
<lazyPower|Sprint> whatever that gets init'd to, in this case, its .persistent_charm_config
<lazyPower|Sprint> and the config() object is passed a dest_dir key/value, and it shows at end of install print
<stub> leonard.... I might still have his email around here somewhere...
<lazyPower|Sprint> when config() is loaded in config_changed() - it's missing a key/value that's present in the file.
<tvansteenburgh> lazyPower: i'm gonna come downstairs, having trouble following
<lazyPower|Sprint> ack
<stub> hookenv.config_data() uses the @cached decorator, so it is returning a singleton. It will only be loaded on process startup.
<stub> First invocation I mean
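To illustrate stub's point, a stripped-down imitation of that caching behaviour; this mimics the effect of charmhelpers' @cached, it is not the actual implementation:

    from functools import wraps

    _cache = {}

    def cached(func):
        """Memoise a function for the lifetime of the process."""
        @wraps(func)
        def wrapper(*args):
            key = (func.__name__, args)
            if key not in _cache:
                _cache[key] = func(*args)
            return _cache[key]
        return wrapper

    # stand-in for the persisted charm config on disk
    _fake_disk = {'port': 5432}

    @cached
    def config():
        # imagine this parses the persisted config file
        return dict(_fake_disk)

    first = config()
    _fake_disk['port'] = 5433   # the "file" changes on disk...
    assert config() is first    # ...but the cached copy is returned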
<lazyPower|Sprint> stub: it was an interesting omission from get_item()
<tvansteenburgh1> had to override keys() in Config, the data was there, you just couldn't see it in keys()
<stub> sounds like a bug. would never have suspected a bug ;)
<tvansteenburgh> haha
<stub> So I just got an automatic test run result, and the same problem as lazyPower|Sprint . The test suite is detecting the default series is trusty, and attempting to deploy that.
<stub> But the branch being tested is precise, so everything fails.
<lazyPower|Sprint> stub: yeah :( i'm working through this a layer at a time
<lazyPower|Sprint> trying to unwind complexity
<lazyPower|Sprint> i got it to deploy by moving it  to trusty, however install is failing consistently as its not deploying with +x perms on hooks.py - and i'm confused as to how that happened
<stub> easy fix for me is to have the runner set the SERIES environment variable, cause then I don't need to update anything ;)
<stub> huh - hooks.py doesn't have the execute bit here
<stub> lazyPower|Sprint: Try refreshing the branch. I just pushed a change to the perms to that file.
<lazyPower|Sprint> stub: they're firing off like crazy atm i should have results shortly
<lazyPower|Sprint> stub: this branch looks good in CR format, the tests need more work at the end of the day
<lazyPower|Sprint> i'm going to make the associated comment on the MP and run the readme deployment tests. so long as everything passes standup i'm happy with whats here
<stub> ta
<lazyPower|Sprint> np, ty for taking me through the ropes of the unfamiliar territory of pgsql tests
<vtolstov> hello
<vtolstov> can somebody help me and say: how can I provide website: interface: http and https?
<jamespage> coreycb,
<jamespage> https://code.launchpad.net/~corey.bryant/charms/trusty/cinder/remove-missing/+merge/237439
<vtolstov> how can I specify an array, or what do I need to do?
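vtolstov's question never gets an answer in-channel; for context, a charm declares each relation it offers as a named entry in metadata.yaml, so a sketch of exposing two endpoints might look like this. The relation names are free-form and the idea of reusing interface: http for the TLS side is an assumption, not an established convention - in practice https handling is often a charm config option rather than a separate interface:

    # metadata.yaml sketch; names are illustrative
    provides:
      website:
        interface: http
      secure-website:
        interface: http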
<jamespage> unit test failures
<coreycb> jamespage, doh
<jamespage> coreycb, :-)
<jamespage> coreycb, if you have time - https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/disable-security-groups/+merge/237933
<jamespage> we're running that on serverstack :-)
<coreycb> jamespage, sure I'll take a look - oh so it's being tested atm :)
<coreycb> jamespage, btw, does it make sense to add openstack-origin and openstack upgrade support to that charm?
<jamespage> coreycb, no, because it's a subordinate; it would get upgraded by its principal
<coreycb> jamespage, ok, was wondering about that
<jamespage> however having it twiddle its config might be a good idea
<jamespage> right now its not an issue
<coreycb> ok
<coreycb> jamespage, so nova-compute presumably reinstalls packages for it?
<jamespage> coreycb, yes it will
<coreycb> jamespage, ok
<jamespage> coreycb, this would get tricky if the list of packages needed to change - but atm it does not
<jamespage> ...
<jamespage> so we are OK for now
<jamespage> not ruling out altogether tho
<coreycb> jamespage, ok, there are a decent number of tempest failures after upgrade, so I wanted to blame it on that.  I need to dig some more.
<jamespage> coreycb, icehouse -> juno?
<coreycb> yes
<jamespage> coreycb, there will be because of that neutron ipset bug I just uploaded a fix for
<jamespage> without it you can't spin up instances....
<coreycb> jamespage, ah maybe that's it.  yeah that would do it.
<jamespage> basically the port creation fails because neutron can't setup the security correctly
<roadmr> Hello folks, I had trouble getting nested lxc working with juju; meaning, juju deploying to a local provider, and having the deployed workload start some containers (inside the local-instantiated container)
<roadmr> anybody had success with this?
<vtolstov> anybody knows ?
<weblife> I am not able to bootstrap for some reason after I destroyed my last environment.  Bootstrap will not load the toolset for my ubuntu distro, 14.10.  My output: http://paste.ubuntu.com/8534392/
<stokachu> weblife: yea it's a known issue
<stokachu> weblife: they're working on getting the certificate renewed i believe
<weblife> stokachu: thank you
<weblife> stokachu: you hear about an eta?
<stokachu> weblife: yea try it now just got word they updated the cert
<stokachu> works in browser  but please test juju
<stokachu> weblife: any luck?
<weblife> trying now
<weblife> launching
<weblife> stokachu: launching
<stokachu> weblife: cool
<weblife> stokachu: success!
<weblife> has the issue with running juju on the local provider alongside external environments been fixed?  I want to install the local version for some tests.
<weblife> I just want the juju team to know I am using juju for my grad project.  Thanks for your work.
<zezom> is it possible to run juju on a maas controller server?
#juju 2014-10-11
<catbus1> zezom: yes. I have juju-core and maas-cluster-controller installed and running on the same server.
<zezom> catbus1, thanks :)
<zezom> catbus1, and just to be obvious you also run your virtuals on that same server?
<catbus1> virtuals?
<zezom> kvm's
<zezom> or containers
<catbus1> I have maas managing virtual nodes via kvm and juju to talk to maas to deploy charms
<zezom> I only have one server for the time being and I need to run everything on that one server
<zezom> thanks
#juju 2015-10-05
<derekcat> core, dev: Hey guys, trying to get started with juju-quickstart and I'm getting a certificate verify failed error: http://paste.ubuntu.com/12691122/  Any thoughts on this?
<bdx> core, dev: Whats going on??
<bdx> I'm trying to give some devs over here at darkhorse a juju demo, and get them bootstrapped.... I'm having issues and seem to be blocked .... can't seem to get anyone bootstrapped to a local env, or openstack provider....
<bdx> core, dev: "juju bootstrap -e local" and "juju-quickstart mediawiki-single" are failing on fresh trusty boxes
<bdx> core, dev: no luck with "juju bootstrap -e openstack" either......any insight would be greatly appreciated..... I seem to be stuck between a rock and a hard place here.... I will submit bugs for all of this, just thought I would run it by you guys on here first.... ~ thanks
<derekcat> core, dev: I'm the poor soul bdx is trying to help get started with juju-quickstart
<marcoceppi> o/ derekcat bdx
<marcoceppi> derekcat bdx is there some kind of proxy or firewall with ingress/egress filtering?
<bdx> marcoceppi: negative
<marcoceppi> the url, https://api.jujucharms.com/charmstore/v4/bundle/mediawiki-single/archive/bundle.yaml doesn't produce any SSL issues or warnings for me atm
<marcoceppi> bdx: can you load that URL in a browser to see if you get the same errors?
<marcoceppi> bdx: can you also lmk the version of juju (`juju version`) and quickstart (dpkg -l | grep quickstart) you're using?
<derekcat> marcoceppi: It works fine in a browser for me?
<bdx> the browser resolves it just fine....no errors there...only with juju-quickstart...
<marcoceppi> as you can probably guess, that's not supposed to happen ;)
<derekcat> marcoceppi: Don't have dpkg - hang on
<derekcat> marcoceppi: Wait... ubuntu command?  Sorry, on OS X here
<derekcat> marcoceppi: https://bugs.launchpad.net/juju-quickstart/+bug/1503003
<mup> Bug #1503003: Cannot communicate with charm store <juju-quickstart:New> <https://launchpad.net/bugs/1503003>
<marcoceppi> derekcat: ahhh, clarity
<marcoceppi> derekcat: bdx said trusty machine so I assumed ubuntu
<marcoceppi> derekcat: did you brew install quickstart?
<derekcat> marcoceppi: correct
 * marcoceppi checks that version
<derekcat> marcoceppi: "brew install juju-quickstart" was the command
<marcoceppi> derekcat: that may be lagged by a few releases, I'm not 100% certain
<marcoceppi> I'm traveling atm, don't have access to my OSX box unfortunately to check there
<marcoceppi> 2.2.1 is the latest version and that's the version you should have gotten with homebrew
<aisrael> Looks like juju-quickstart is current in brew - 2.2.1
<derekcat> marcoceppi: Yeah 2.2.1 is what brew installed
<aisrael> derekcat: what's the quickstart command you're running?
<derekcat> marcoceppi, aisrael: juju-quickstart mediawiki-single
<aisrael> derekcat: and what environment/provider are you using (juju switch)
<marcoceppi> bdx: are you having other issues with Juju atm?
<bdx> marcoceppi: yes ...many unfortunately
<marcoceppi> bdx: well I'll see if I can help there while aisrael helps derekcat
<bdx> marcoceppi: I'm writing up a bug right now for broken local provider ....
<bdx> awesome
<marcoceppi> bdx: so, you can't bootstrap - what version of juju?
<bdx> marcoceppi: I cannot get juju to bootstrap local provider or openstack provider
<bdx> 1.24.6-trusty-amd64
<marcoceppi> bdx: alright, and if you bootstrap with `juju bootstrap --debug` what's the output?
<bdx> when I try to bootstrap on a fresh trusty box I get : http://paste.ubuntu.com/12691346/
<derekcat> aisrael: openstack
<bdx> local provider bootstrap^^
<aisrael> derekcat: ok, cool. I'm deploying that bundle to aws now. Let me see if I can recreate
<marcoceppi> bdx: what happens if you run `sudo initctl list | grep juju` - could you post the output?
<derekcat> aisrael: Okey
<bdx> marcoceppi: no output
<marcoceppi> bdx: the juju-db, which is mongodb, doesn't seem to be starting properly, are there any log files in /var/log/upstart/juju-* ?
<bdx> marcoceppi: http://paste.ubuntu.com/12691411/
<marcoceppi> bdx: is that from /var/log/upstart/juju-db.log?
<bdx> juju-db-bdx-local.log ...yea
<marcoceppi> bdx: well, that's the problem. juju-db (mongodb) package is stacktracing like crazy
<marcoceppi> I've never seen that
<marcoceppi> bdx: can you dpkg -l | grep juju-db to get a version?
<bdx> marcoceppi: well theres the problem  .... no output
<marcoceppi> bdx: how about for `juju-local` instead of juju-db?
<marcoceppi> I can't really remember what the package name is
<bdx> marcoceppi: I probably don't have juju-db installed for some reason ....I didn't know it was a separate dep....1.24.6-0ubuntu1~14.04.1~juju1
<marcoceppi> bdx: there may not actually be a separate deb
<bdx> ok
<marcoceppi> bdx: it's juju-mongodb*
<bdx> 2.4.9-0ubuntu3
<marcoceppi> bdx: that's what I have, odd.
<marcoceppi> i'm using the 1.25-beta but I'll spin up a VM real quick to test
<bdx> marcoceppi: thanks!
<marcoceppi> bdx: do you have mongodb proper installed as well?
<bdx> yes
<marcoceppi> bdx: I wonder if that's why, maybe it's calling the wrong mongodb binary
<marcoceppi> which would explain the stacktrace
<marcoceppi> I'll try both scenarios
<bdx> totally
<bdx> ok
<bdx> my mongodb version - db version v2.4.9
<bdx> mongod*
<bdx> so to be explicit, I provision a new trusty/vagrant box with "vagrant init ubuntu/trusty64; vagrant up --provider virtualbox"
<marcoceppi> bdx: oh, we can probably make this easier for you, we have a vagrant box with juju already installed and configured for local
<marcoceppi> aisrael: where are the latest instructions for that? the vagrant stuff?
<bdx> ssh into the box, and run "sudo apt update && sudo apt upgrade && sudo apt install juju juju-core juju-local"
<aisrael> derekcat: Unable to reproduce it on my end. I am on 10.11, though. Could you run it again with with --debug and pastebin that?
<bdx> is there anything else I should be installing to get local provider working?
<bdx> ok
<marcoceppi> bdx: that's the sum of it
<aisrael> marcoceppi: Current is still https://jujucharms.com/docs/1.24/howto-vagrant-workflow. I'm polishing the rewrite I started in DC this week.
<derekcat> aisrael: http://paste.ubuntu.com/12691475/
<derekcat> aisrael: There you go
<marcoceppi> bdx: ^ that vagrant image has the local provider already bootstrapped
<bdx> marcoceppi: I would like to know how to provision the local provider nonetheless, it should work anyway right?
<aisrael> derekcat: ok, thanks. Give me a few. I think I might know what's up
<marcoceppi> bdx: totally should work
<marcoceppi> bdx: just trying to get you up and running while I suffer on airplane wifi to get this vm spun up
<bdx> lol
<bdx> awesome....thanks
<bdx> I am pumped on that....but more importantly, this has been plaguing me for a while, and I would like to see it fixed so I can tell devs to bootstrap a local provider without having to hand out workarounds to everyone
<marcoceppi> bdx: I agree, and we're right here with you on that
<marcoceppi> bdx: this is x86 architecture, right?
<bdx> yes
<aisrael> Do you have curl installed via brew or using the built-in version?
<bdx> derekcat:^
<aisrael> derekcat: also, openssl
<aisrael> derekcat: I suspect `brew install openssl` will fix the problem. If that's the case, we might want to add that as a dependency to juju-quickstart
<derekcat> aisrael: I think it's the built in version
<derekcat> aisrael: let me try it, just a sec
<bdx> so ....here is a big missing step.....lets say a user wants to bootstrap a local provider......there seems to be a lot of missing info on what 3rd party packages are needed
<derekcat> aisrael: Same error as the last paste-bin'ed >_<
<marcoceppi> bdx: so, for most people, it's just been apt-get install juju juju-local and you're good to go
<bdx> e.g. mongodb-server, lxc, juju-local.....I think a lot of people are really confused because they don't know to have the extra deps.....on top of that there are multiple versions and locations to get mongodb from ....if these are post juju install ops they should be doc'd up much better.....e.g. on a fresh trusty box after "sudo apt update && sudo apt upgrade && sudo apt install juju-local" I don't have lxc or
<bdx> mongodb installed
<marcoceppi> bdx: i just did that in a fresh trusty container without issue (though it's a LXD container on a linux laptop)
<aisrael> derekcat: ok. Let me try to recreate from another machine.
<marcoceppi> bdx: it's not mongodb, it's juju-mongodb, and you should have lxc installed! bdx what does `dpkg -l | grep juju` show?
<bdx> http://paste.ubuntu.com/12691581/
<derekcat> aisrael: Okey
<marcoceppi> bdx: this is what happened when I ran your install line on a fresh trusty machine: http://paste.ubuntu.com/12691585/
<bdx> no lxc, thats for sure....you have lxc installed, and mongodb-server after "sudo apt install juju-local"???
<marcoceppi> what in the, how did you not get all the other dependencies
<marcoceppi> bdx: see my paste, it seems like your trusty vm isn't processing apt dependencies
<marcoceppi> bdx: did you install juju-local?
<bdx> totally....but then nothing is for that matter....I have tried this on openstack instances deployed with up to date cloud imgs....vagrant trusty box, intel nuc - fresh trusty server....same result on all
<marcoceppi> bdx: that is so bizarre.
<bdx> marcoceppi: following "vagrant init ubuntu/trusty64; vagrant up --provider virtualbox && vagrant ssh" I run "sudo apt update && sudo apt upgrade && sudo apt install juju-local"
<marcoceppi> bdx: huh, I've never used the apt command before, but it doesn't seem to have any flags that would interfere with package install
<marcoceppi> bdx: I'm going to try to ssh into my OSX machine at home. I'm not even sure if SSH is enabled or whatever but my next step is to replicate your setup
<bdx> marcoceppi: I just re-init'd my vagrant box, ran apt update, upgrade, and install juju-local and now have what look to be the correct pkgs e.g. lxc, mongodb
<marcoceppi> bdx: most of the core developers are at a sprint right now, where I'm currently traveling to, so if I can't get an answer before I land I'll be surrounded by the people responsible for the tool ;)
<marcoceppi> bdx: bizarre!
<bdx> I know...I'm tripping out here....
<bdx> ok
<bdx> I'm now attempting a bootstrap to local
<marcoceppi> totally rad, I can get to my OSX machine
<marcoceppi> bdx derekcat: what version of OSX are you two on?
<bdx> 10.10
<marcoceppi> cool, I've got a Yosemite vm laying around with a fresh install
<bdx> marcoceppi: awesome!
<bdx> marcoceppi: I've a successful local provider setup....I'll continue to monitor my installs and bootstrappings more closely to see if I can't reproduce more of what I've been experiencing
<marcoceppi> bdx: awesome! glad you were able to sort that out, though it's totally weird what you were seeing
<marcoceppi> derekcat: I just installed juju-quickstart from homebrew on a clean yosemite machine, going to try to repro against aws
<marcoceppi> bdx: glad you got it working, because I have no idea how to install a dmg from the command line
<aisrael> derekcat: are you using stock python or installed from brew?
<derekcat> aisrael: should be stock python
<aisrael> derekcat: What's ruby -ropenssl -e "p OpenSSL::X509::DEFAULT_CERT_FILE" show?
<bdx> marcoceppi: on to the second issue: "juju bootstrap -e openstack"
<bdx> I've read through this: https://jujucharms.com/docs/devel/howto-privatecloud
<marcoceppi> bdx: yes, next on the docket!
<derekcat> aisrael: "/System/Library/OpenSSL/cert.pem"
<aisrael> derekcat: great, thanks! Closer to recreating this
<derekcat> aisrael:  ^_^
<bdx> marcoceppi: when I "juju bootstrap -e openstack --debug" : http://paste.ubuntu.com/12691759/
<marcoceppi> bdx: okay, yeah, it's a simplestreams problem
<bdx> it seems the default for tools url returns a 404
<marcoceppi> this is always a pain to setup for the first time, but after it's setup it's fine
<bdx> totally....I'm trying man
<bdx> this seems like an issue, the default tools url: https://streams.canonical.com/tools
<marcoceppi> bdx: minus your credentials, what does the openstack stanza in your environments.yaml file look like?
<marcoceppi> bdx: OH, you're on 1.22.6
<marcoceppi> I just realized that
<hermanbergwerf> I'm not sure if this is a noob question but does it make sense to use small single core (say 512mb RAM) VMs for state servers?
<marcoceppi> not bad, but the main archive lags behind the juju releases by a few minor releases because of the process to get updated releases into the main archive. We've ironed that out for releases going forward but there's still a bit of a lag. While not directly related, you may want to add the stable juju ppa `sudo add-apt-repository ppa:juju/stable` which will give you the latest juju 1.24.6
<marcoceppi> hermanbergwerf: that should be fine for the stateserver, the biggest limiting factor is typically diskspace before anything else
<marcoceppi> bdx: ^^
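Spelled out, the PPA route marcoceppi suggests is the standard three commands:

    sudo add-apt-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install juju-core juju-local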
<bdx> arrrrg....ok
<bdx> omp
<marcoceppi> bdx: 1.22.6 isn't bad, it's just not latest, but it also won't directly solve your openstack issue
<marcoceppi> bdx: the environments.yaml stanza sans credentials would be a good start for working through that
<marcoceppi> derekcat: I was able to bootstrap and deploy the mediawiki-single against aws on a fresh 2.2.1 juju-quickstart on 10.10. The only difference is I had `bew install wget` run on that machine, which installs an updated version of openssl
<bdx> marcoceppi: ok...upgraded to 1.24.6.... here's my environments.yaml openstack stanza : http://paste.ubuntu.com/12691827/
<derekcat> marcoceppi: Hmm...  What openssl version are you running?  The only odd thing is that I'm on 10.10.4 instead of 10.10.5..  Just tried brew install wget but no difference with the juju-quickstart error
<marcoceppi> derekcat: very interesting
<bdx> marcoceppi: now with update to 1.24.6 : http://paste.ubuntu.com/12691857/
<beisner> coreycb, mind doing the honors?  gnuoy had +1'd, pending conflict resolution.  https://code.launchpad.net/~1chb1n/charms/trusty/swift-proxy/amulet-update-1508/+merge/273445
<coreycb> beisner, sure np
<coreycb> beisner, hmm I don't see the +1
<aisrael> marcoceppi: derekcat: supposedly this is an issue with urllib2/httplib, related to python not having ssl support built-in. Trying to find verification of that now.
<beisner> coreycb, oh it's in the comment
<beisner> if happy to merge == +1 that is
<coreycb> gotcha
<beisner> coreycb, one driveby on that.  all along, swift-proxy's make lint was --excluding hooks  o.O
<coreycb> beisner, ah convenient!
<beisner> lol
<beisner> `flake8 --exclude hooks unit_tests tests lib` not-ftw
<marcoceppi> aisrael: so odd, I'm not hitting it on my 10.10
<marcoceppi> aisrael derekcat: this is what I have installed via brew: http://paste.ubuntu.com/12691913/
<derekcat> marcoceppi aisrael: http://paste.ubuntu.com/12691936/
<derekcat> marcoceppi aisrael: There's mine
<marcoceppi> derekcat: the only main difference I see is no charm-tools; while this shouldn't have a change in juju-quickstart, I wonder if installing it (you'll probably want them anyway) adds an updated version of a dep which addresses this ssl issue
<derekcat> marcoceppi: No dice after installing charm-tools
<marcoceppi> guh
<derekcat> >_<
<derekcat> Ok, I'm going to reload Yosemite on here and then update to El Capitan..  I was overdue for this anyway >_<  I'll jump back in a while and let you guys know what it's looking like.
<derekcat> aisrael marcoceppi: Thank you! Hopefully it's just some weird cruft..  BBL
<marcoceppi> bdx: fwiw, you can let derekcat know I have a vanilla 10.10.4 machine
<marcoceppi> bdx: did you create the imagestream metadata for your openstack?
<marcoceppi> I don't see it listed in the environments.yaml file
<beisner> coreycb, thx man
<bdx> marcoceppi: derekcat is in the process of reimaging to 10.10.4, he will be back ASAP (5 mins)
<bdx> marcoceppi: yes I did...omp
<marcoceppi> bdx: I don't see it listed in the environments.yaml file for image-metadata-url
<bdx> marcoceppi: its a default....: http://paste.ubuntu.com/12692149/
<marcoceppi> bdx: on openstack installs that won't work because we don't know how your cloud is setup. it works fine on public clouds, since we have a partnership with them and publish images to those clouds
<marcoceppi> bdx: since you're the one creating images in glance, you know the image ids, etc
<marcoceppi> so you need to create metadata to tell juju where those images are
<bdx> I see
<marcoceppi> ie ubuntu-trusty = 7ad62ed4-6ba4-11e5-b31c-7b985e61fd10
<marcoceppi> etc
<marcoceppi> bdx: the top half of that page goes through how to generate the metadata, I'll be honest I haven't generated metadata for an openstack cloud in a while, lmk if you run into an issue. I distinctly remember last time thinking "a lot of this could be automated"
<bdx> marcoceppi: yeah ....totally will...
<marcoceppi> bdx: I have about 30 mins of battery left, I'll be back online around 6pm PST
<derekcat> marcoceppi, aisrael: Hey guys I'm back
<marcoceppi> derekcat: hey, welcome back. aisrael stepped out to get some food I think and my battery is about 20 mins before failure
<marcoceppi> derekcat: I've got a chromebook, so I'll try to jump back online when this dies, but just an fyi
<derekcat> marcoceppi, aisrael: I'm about to pastebin another example, but it looks like it was cruft left on my system
<derekcat> marcoceppi, aisrael: http://paste.ubuntu.com/12692275/
<bdx> marcoceppi: so....after generating my metadata with "juju metadata generate-image -i e41dee50-872e-4f37-8aee-b090e020ddad -d /Volumes/WorkHD/Users/jbeedy/juju/juju_meta/"
<bdx> marcoceppi: I get this message: http://paste.ubuntu.com/12692278/
<bdx> marcoceppi: following which, I run "juju bootstrap --metadata-source /Volumes/WorkHD/Users/jbeedy/juju/juju_meta/"
<marcoceppi> derekcat: okay, cool, so SSL is resolved, you're just going to have to wait for bdx to get a working openstack configuration done up. derekcat you'll basically have to edit the ~/.juju/environments.yaml file to put things like creds, and horizon ip in
<derekcat> marcoceppi, aisrael: fresh imaged computer, did the ruby install of brew, xcode CLI tools, brew install juju-quickstart, juju-quickstart mediawiki-single
<derekcat> marcoceppi: okey cool!
<marcoceppi> derekcat: in the mean time you can sign up for http://developer.juju.solutions/ a program where we hand out AWS cloud time/credentials to those working on juju and charming
<bdx> marcoceppi: then get this output: http://paste.ubuntu.com/12692284/
<derekcat> marcoceppi, aisrael: Thank you so much for helping us troubleshoot this!  The fault is mine for having a crufty environment...
<derekcat> marcoceppi: Will do!
<marcoceppi> derekcat: no worries, glad we could get you going
<bdx> marcoceppi: I feel this is due to the default tools url returning a 404
<marcoceppi> bdx: interesting.
<marcoceppi> bdx: was that using --debug ?
<bdx> no, omp
<bdx> marcoceppi: http://paste.ubuntu.com/12692306/
<bdx> shoot....missed a key...oh well ...my bad
<bdx> its a dev env anyway
<marcoceppi> bdx: can you...huh, bdx this will fix it temporarily, battery is marching to zero, but this is NOT the right way to solve this
<bdx> oooooh nnnnnoooooo
<marcoceppi> bdx: add --upload-tools to your boostrap command
<bdx> marcoceppi: still no luck even with --upload-tools : http://paste.ubuntu.com/12692320/
<marcoceppi> bdx: can you paste the command you supplied?
<bdx> juju bootstrap --upload-tools --metadata-source /Volumes/WorkHD/Users/jbeedy/juju/juju_meta/ --debug
<marcoceppi> --upload-tools should override whatever juju is doing and upload a copy of the local compiled tools
<marcoceppi> beisner are you around?
<beisner> o/ hi marcoceppi
<marcoceppi> beisner: you probably have the most experience that's online right now, bdx is trying to setup access with juju to his openstack cloud
<marcoceppi> beisner: but he's getting an error about tools not found
<marcoceppi> bdx: what does `tree /Volumes/WorkHD/Users/jbeedy/juju/juju_meta/` show?
<beisner> marcoceppi, bdx - hi.  i've got ~15min before i've got to head out and about.  but can be back online later in the eve too.
 * beisner scrolls back, paste-hopping..
<bdx> beisner: ok, no worries, we can pick this up later if you like....
<bdx> marcoceppi: thanks for your help today
<marcoceppi> bdx: no worries, got a sec to pastebin the output of `tree /Volumes/WorkHD/Users/jbeedy/juju/juju_meta/`? I have an idea
<beisner> bdx, can you paste sanitized environments.yaml, and `keystone catalog`  and  `keystone endpoint-list`  and `neutron net-list` ?  that'll give me a good idea of the whats and wheres.
<bdx> marcoceppi: http://paste.ubuntu.com/12692371/
<bdx> beisner: omp
<marcoceppi> bdx: I think I see the problem
<bdx> marcoceppi: ?
<marcoceppi> bdx: create a tools directory in your juju_meta dir, and add releases and streams into that
<marcoceppi> bdx: you should only have images and tools in the juju_meta dir, then inside tools a streams and releases dir
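A minimal sketch of the directory layout marcoceppi is describing, in Python; the base path is taken from bdx's commands above, and the actual simplestreams files that go inside each directory are left out:

    import os

    # layout expected by `juju bootstrap --metadata-source <dir>`:
    #   juju_meta/images, juju_meta/tools/releases, juju_meta/tools/streams
    base = '/Volumes/WorkHD/Users/jbeedy/juju/juju_meta'
    for sub in ('images', 'tools/releases', 'tools/streams'):
        path = os.path.join(base, sub)
        if not os.path.isdir(path):
            os.makedirs(path)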
<bdx> oooooh
<bdx> marcoceppi: niceeee
<marcoceppi> bdx: give that a whirl, my bouncer will remain online, but I've got maybe 5 mins of battery left
<marcoceppi> if you get a "launching instance" you should be pretty good to go from that point forward
<marcoceppi> *suspense*
<bdx> marcoceppi: http://paste.ubuntu.com/12692397/
<bdx> after making your modifications
<bdx> beisner: environments.yaml -> http://paste.ubuntu.com/12692390/
<marcoceppi> bdx: bugger.
<marcoceppi> bdx: you could try adding --agent-version=1.24.6 to your bootstrap command, that *might* work
<bdx> error: flag provided but not defined: --agent-version
<marcoceppi> bdx: perfect, just what I expected *eye roll*
<marcoceppi> bdx: I've got to hibernate, I'll try to jump on with my chrome book
<bdx> beisner: keystone catalog -> http://paste.ubuntu.com/12692405/
<bdx> marcoceppi: ok
<bdx> beisner: keystone endpoint-list -> http://paste.ubuntu.com/12692430/
<bdx> beisner: neutron net-list -> http://paste.ubuntu.com/12692442/
<beisner> bdx, the file:/// thing i'm seeing strikes me as odd.  while that may work, it's different than my typical use.  ie:
<beisner>  simplestreams.go:429 read metadata index at "http://10.245.160.50:80/swift/v1/simplestreams/data/streams/v1/index.json"
<beisner> bdx, also, the environments.yaml says:  use network ***5f89.  but i don't see that net id in the paste.
<beisner> which isn't the current roadblock of course, but once the streams are happy, will prob be an issue
<bdx> beisner: gotcha....sorry about that .....the network id is currently up to date in my environments.yaml....that was an environments.yaml from a day or two ago...errr : network: 7af5fb70-7cde-4707-9564-2d770426b3de
<beisner> bdx, ok cool.  just mentally wiring up all the various assets ;-)
<bdx> totally.
<beisner> i see product streams endpoint.  are you using it to feed glance images?
<bdx> beisner: yea
<beisner> and simplestreams ... are they in blobs?
<bdx> beisner: should I use that as a tools url endpoint?
<bdx> beisner: errr....im not sure ....omp
<beisner> bdx, i've got to run, kiddo sports.  here is all i define in my environments.yaml.   simplestreams are in a blob (url above), and images are in glance.  everything else is automagic.  http://paste.ubuntu.com/12692489/
<bdx> beisner: awesome, I'll give you an update tomo on what I was able to figure out! Thanks for your help!
<beisner> bdx, yw.  see ya!
<marco-airplane> great, I crashed my bouncer
<marco-airplane> bdx: How's it going?
<aisrael> derekcat: glad to hear that fixed it!
<marco-airplane> There you are marcoceppi
<bdx> marco-airplane: whats up
<marco-airplane> bdx: any progress?
<bdx> still no luck bootstrapping openstack provider....I'm tearing down my dev stack atm to reprovision with some new network hardware....
<marco-airplane> bdx: ack
<marco-airplane> marcoceppi, silly bouncer
#juju 2015-10-06
<beisner> jamespage, gnuoy, wolsen  - even with happy a/ptr records, Precise-Icehouse n-c-c/next is broken when ssh migration auth is set.   if i substitute the n-c-c stable charm into next.yaml, the problem goes away.   break is at or after n-c-c/next rev188.
<beisner> https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1480677
<mup> Bug #1480677: oslo.messaging.rpc.dispatcher AttributeError: 'Connection' object has no attribute 'connection_errors' <amulet> <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New for gnuoy> <nova-compute (Juju Charms Collection):New for gnuoy> <rabbitmq-server (Juju Charms
<mup> Collection):New> <https://launchpad.net/bugs/1480677>
<beisner> bahh wrong link.
<beisner> jamespage, gnuoy, wolsen - https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1500589
<mup> Bug #1500589: n-c-c/next fails cloud-compute-relation-changed when migration-auth-type set for Precise-Icehouse <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1500589>
<gnuoy> beisner, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/lp1500589/+merge/273508 should fix Bug #1500589
<mup> Bug #1500589: n-c-c/next fails cloud-compute-relation-changed when migration-auth-type set for Precise-Icehouse <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):Confirmed for gnuoy> <nova-compute (Juju Charms Collection):Invalid by gnuoy> <https://launchpad.net/bugs/1500589>
<jamespage> thedac, https://code.launchpad.net/~james-page/charms/trusty/ceph/status - will work on osd and radosgw as well
<jamespage> that one works OK  - needs unit tests
<jamespage> gnuoy, beisner: I fixed up my crappy failing unit tests - apologies for that
<jamespage> insufficient patching
<jamespage> thedac, http://paste.ubuntu.com/12696040/
<jamespage> 'Unit is active' has become policy ....
<jamespage> thedac, and - http://paste.ubuntu.com/12696048/
<beisner> hi jamespage, gnuoy, thedac - http://paste.ubuntu.com/12696720/   i'm noticing that with next.yaml deploys, glance status stays in blocked 'Missing relations: messaging' - although the deploy completes successfully otherwise.  indeed there's no amqp relation.  do we need one?
<beisner> jamespage, ty for the quick n-c fix btw
<jamespage> beisner, we should probably just add that
<jamespage> its used for ceilometer notifications
<gnuoy> jamespage, I'm not sure I agree
<gnuoy> IS for one, don't use ceilometer
<beisner> jamespage, right, but i think 'blocked' will eventually really block us when we do more clever 'my thing is really deployed' logic.   i imagine deployer, amulet and juju itself will start watching that soon.
<gnuoy> jamespage, I think we should move it to an optional relation
<jamespage> gnuoy, beisner: ok so maybe the glance messaging relation should be optional
<jamespage> I'm easy either way
<beisner> so, optional relation then.
<beisner> ;-)   yeah that
<jamespage> but +1 on it being optional
<beisner> jamespage, gnuoy - either way, since default/next yamls include ceilo, should we relate it to glance in those bundles?
<gnuoy> I would think so
<beisner> i haven't dug into the code.  is glance aware of whether/not ceilo is present
<beisner> ?
<jamespage> beisner, nope
<jamespage> beisner, +1 to adding it to the bundle tho
<jamespage> thedac, gnuoy: how does this look - http://paste.ubuntu.com/12696803/ ?
<jamespage> that's ceph-osd
<gnuoy> jamespage, it looks wonderful, I'd like to hug it
<jamespage> gnuoy, unit tests of the atlantic and mps by the end of today
<jamespage> gnuoy, its basic but I think its all fairly sound
<gnuoy> kk, thanks
<beisner> ok, i'll adjust o-c-t bundles.
<beisner> fyi, raised for tracking:  bug 1503272
<mup> Bug #1503272: workload status:  amqp relation should be optional <openstack> <uosci> <glance (Juju Charms Collection):New> <https://launchpad.net/bugs/1503272>
<apuimedo> jamespage: is it possible to limit the amount of units of a charm?
<thedac> jamespage: wrt, ceph, fantastic.
<thedac> wrt glance +1 to moving messaging to optional. In fact if no one has done it already I will
<beisner> ha!  /me just saw gnuoy hug a pastebin
<mhall119> jcastro: jose: is there an official way to request a charm be written?
<mhall119> the developer portal could really benefit from using Haystack, but I don't see a charm for it in the charm store
<thedac> gnuoy: care to review? https://code.launchpad.net/~thedac/charms/trusty/glance/optional-amqp/+merge/273542
<gnuoy> thedac, approved, would you mind doing the landing?
<thedac> will do
<gnuoy> ta
<thedac> I'll wait on osci amulet before doing so
<beisner> gnuoy, fyi, you'll be getting 2 amulet tests for kicking the bot while he/she/it was already running an amulet test on behalf of n-c-c  ;-)
<gnuoy> Can I kick it ? It seems I can
<beisner> new commit = retest
<beisner> so the amulet result that is there, was for rev 199
<beisner> there will be a new set for rev 200
<gnuoy> beisner, so I think it fixes the bug you found either way
<beisner> lol
<beisner> yes thanks!
<gnuoy> np
<beisner> coreycb, thedac - i'm on a normalize-makefiles mission for *os-charms.  noticing that some charms aren't lint checking the actions/ dir.  when i enable that, some of them have unused uuid import in actions/openstack_upgrade.py
<beisner> and when i remove that import, unit tests start to fail with 'actions/openstack_upgrade.py' does not have the attribute 'uuid'
<coreycb> beisner, I'd mark that low priority for now
<thedac> agreed
<beisner> coreycb, thedac - if i move forward, we'll either have failing lint checks or failing unit checks.
<beisner> i can just ignore the actions dir for now ?
<coreycb> beisner, yeah I'd vote for that and a bug
<thedac> Yes, I would not make the change (yet) that checks the actions directory
<beisner> fwiw, some do have it enabled, though for the pause/resume actions
<beisner> be aware ^
<beisner> so that'll force the next guy to make it right ;-)
<beisner> on those ones
<beisner> ok, squashing actions/ dir normalization wrt lint checks for now, will add affected charms to bug/1503340
<mup> Bug #1503340: actions dir not checked for lint;  unused import when checked;  unit test fails when lint is resolved. <openstack> <uosci> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1503340>
<beisner> ie. if it's already checking actions/, it will keep checking, but i won't force it to just yet.
<coreycb> beisner, sounds good, thanks
<beisner> coreycb, yw
<beisner> jamespage, gnuoy, thedac, coreycb - while i'm batch-updating, any input on changing maintainers in metadata.yaml to something generic?   it'd be an easy change now.   atm, we have a mix of yolanda, adam g, jamespage.
<thedac> Do we have an openstack-charmers@ mail alias? or something like it?
<thedac> ls
<thedac> :)
<beisner> thedac,
<beisner> -rw-rw-r-- 1 ubuntu ubuntu 467 Oct  6 14:09 metadata.yaml
<beisner> ;-)
<thedac> wow, my system is lagging
<beisner> not sure re: mail alias.  that's what we should probably do i think
<beisner> coreycb, can you check if you just rcvd a test email from my gmail acct to the openstack charmers list?
<coreycb> beisner, I'm not seeing anything
<ddellav> beisner, coreycb thedac i can take a look at that uuid and lint issues with the upgrade actions just as soon as im done with what im working on here.
<beisner> coreycb, hrm.  no reject or awaiting-moderation reply on my end.  i wonder if the public can send to that?
<beisner> https://launchpad.net/~openstack-charmers
<beisner> jamespage, gnuoy, thedac, coreycb - i'm going to ignore the maintainers field for this batch.  but let's figure that out before release and i can push another batch to update all of the oscharms.
<mattyw> lazypower, ping?
<lazypower> mattyw, pong
<mattyw> lazypower, hey hey - I'm not sure who else to ping - but if you're busy feel free to move me along...
<mattyw> lazypower, wondering why my merge request isn't appearing the queue https://code.launchpad.net/~mattyw/charms/trusty/mongodb/mongodb-backup/+merge/273544
<lazypower> mattyw, it's only an hour old, it's likely pending ingest into review.juju.solutions
<mattyw> lazypower, ok cool, so be patient grasshopper is the message :)
<beisner> o/ lazypower, mattyw - our openstack testing bot cares about mongodb (because of ceilometer) and it does some automated testing on proposals there.  i've added a pre-review review ;-)
<beisner> https://code.launchpad.net/~mattyw/charms/trusty/mongodb/mongodb-backup/+merge/273544
<mattyw> beisner, thanks very much - I've just seen it, I'll make those changes
<beisner> mattyw, cool, thanks!
<mattyw> beisner, there was already an action - whose implementation I basically followed, would you like me to fix that up as well?
<lazypower> beisner, <3 :)
<beisner> mattyw, /me looks back...
<beisner> mattyw, lazypower - indeed, there was 1 existing action, outside the coverage of lint checks.
<beisner> http://bazaar.launchpad.net/~charmers/charms/trusty/mongodb/trunk/annotate/head:/actions/perf
<mattyw> beisner, I'm EOD'ing now - anything you'd like me to do, just shove a message on the pr
<mattyw> beisner, if I don't see anything new I'll go ahead with the linting fixes
<beisner> mattyw, thanks.  i'll pull a few heads together and one of us will advise there.
<mattyw> beisner, perfect
<mattyw> night all
<beisner> o/ coreycb - look out!  here's that batch to normalize makefiles and amulet test dependencies @ http://paste.ubuntu.com/12698366/    as discussed, did not address upgrade actions lint or maintainer bits yet.
<coreycb> beisner, ok
<cholcombe> is charmhelpers build for python 2.x?
<cholcombe> coreycb, i believe you set pulls are welcome? :) https://code.launchpad.net/~xfactor973/charm-helpers/status_set-enum
<cholcombe> coreycb, oh wait that has some junk from another branch in there.  i need to clean it up
<coreycb> cholcombe, hmm dunno if I said that or not, but they are!
<cholcombe> :)
<coreycb> cholcombe, actually I can't land to charm-helpers though
<cholcombe> coreycb, who lands that stuff?
<coreycb> cholcombe, for openstack, typically gnuoy, jamespage or dosaboy
<cholcombe> coreycb, oh i meant for charm-helpers
<coreycb> cholcombe, right I was referring to openstack related charm-helpers code
<cholcombe> gotcha
<cholcombe> coreycb, alright i'll add one of them to the review.  thanks!
<marcoceppi> cholcombe: there's a group of charm-helper maintainers that curate that
<cholcombe> marcoceppi, even better
<bdx> core, dev: hey whats going on everyone? I'm wondering which part of cinder (api, volume, scheduler) provides the image-service interface?
<bdx> core, dev: I'm thinking it must be cinder-api..
<bdx> or scheduler, or both...
<marcoceppi> bdx: not sure I understand your question
<marcoceppi> bdx: the cinder charm provides image-service which connects to glance
<bdx> marcoceppi: In the case you deploy cinder in a decoupled fashion, with separate volume units ..... you end up with different units running different cinder services...
<marcoceppi> bdx: right, but we don't have a decoupled charm (this is where my knowledge of the openstack charm ecosystem begins to fade)
<bdx> marcoceppi: https://jujucharms.com/cinder/trusty/29
<marcoceppi> bdx: beisner coreycb or ddellav might be able to help more
<marcoceppi> bdx: OIC
<marcoceppi> it's probably api
<marcoceppi> but that's a guess
<bdx> marcoceppi: how could I perform introspection into this? ...there are a few different relations made to cinder that I have questions about
<marcoceppi> bdx: well, a better readme would be my guess
<bdx> beisner, coreycb, ddellav, marcoceppi: so .... I have 3 availability zones ....in order to use cinder across all 3
<bdx> I need to run cinder volume service in the 3 separate locations
<bdx> in which I configure each of the 3 cinder volume services to include a config-flag that specifies each zone
<bdx> such that "cinder availability-zone-list" shows all three zones are enabled
<bdx> what I am trying to figure out is what relations should be made from the cinder component's interfaces to what other charms
<marcoceppi> bdx: well it probably wouldn't hurt to connect them all to glance, tbh
<marcoceppi> bdx: let me see if I can find a bundle with cinder spun off
<bdx> marcoceppi: here's a small script of what I'm doing to give an idea https://gist.github.com/jamesbeedy/6429af503c3581ba9e7e
<bdx> I have most of the relations dialed...I feel like I should just test the last few to see what happens
<marcoceppi> bdx: whoa that is, whoa
<bdx> ha
<marcoceppi> bdx: I may be able to help make that script a little more manageable
<bdx> oh...do tell
<marcoceppi> bdx: what does charmconf.yaml have in it?
<bdx> marcoceppi: http://paste.ubuntu.com/12699829/
<marcoceppi> bdx: yeah, you basically want a bundle
<JerryK_> Hi! Please is there anyone able & willing to help me with maas & juju networking? Trying to figure out how to expose juju openstack services to public network
<marcoceppi> JerryK_: I can certainly try
<JerryK_> thanks! The thing is I have maas controller connected to internet + private network. Nodes are connected to private network + public. Everything I install through juju is exposed on the private network but I would like to move some of those services to the public one ...
<marcoceppi> JerryK_: what maas version do you have?
<JerryK_> Or maybe I just misunderstood how to make "the right" network layout ...
<JerryK_> 1.8.2
<JerryK_> and juju 1.24.6
<JerryK_> e.g. can't find any doc on how to work with networks in juju. I see the networks section with cird as provided from maas in juju status, but that's all
<marcoceppi> bdx: okay, took forever, https://gist.github.com/marcoceppi/1029c03170a35ca48a10 I got lazy and didn't add all the relations, but we have a way to model large scale deployments without having to write scripts and waits
<marcoceppi> bdx: doesn't answer your question about relations, but it should be something that's easier to maintain over time
<marcoceppi> JerryK_: so networking in maas is still a bit lite, do you have multiple interfaces in the services deployed with juju? or is it just a single private network?
<marcoceppi> JerryK_: fwiw, 1.9 of MAAS makes networking and storage a first-class tenant, it'll be so much easier in the next release
<bdx> marcoceppi: that is awesome. thanks!
<JerryK_> marcoceppi: yeah, heard about that. Code "1.9.alpha" still feels a little bit unstable but I'll give it a try
<JerryK_> marcoceppi: juju goes directly for the network managed by maas. The node has two nics - one for the private (maas-managed) network and a second for the public network
#juju 2015-10-07
<Merlijn_S> Hi! Anyone here has experience with charm compose?
<apuimedo> lazypower: gnuoy`: jamespag`: https://code.launchpad.net/~celebdor/charm-helpers/midonet/+merge/273678
<apuimedo> IIUC I have to get this merged and then sync it to the development branches of the Openstack charms
<apuimedo> then send the necessary patches for those, right?
<mattyw> beisner, ping?
<jamespag`> apuimedo, yah - looking now
<jamespag`> apuimedo, lgtm - landed
<apuimedo> thanks!
<apuimedo> jamespag`: much appreciated ;-)
<apuimedo> jamespag`: gnuoy`: lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next /charm-helpers-hooks.yaml points to lp:charm-helpers branch
<apuimedo> how am I supposed to make the sync?
<apuimedo> I guess I can't just change /charm-helpers-hooks.yaml to the
<apuimedo> nevermind...
<apuimedo> I should look closer, it is already there :P
<apuimedo> so I'll do a `make sync`
<jamespag`> apuimedo, you got it
<apuimedo> it's sad because I had already done it
<apuimedo> I for some reason thought there was a lp:charm-helpers/next for a second
<beisner> mattyw, pong
<mattyw> beisner, hey there, just regarding the mongodb review
<mattyw> beisner, I decided to move most of the common code in dump and restore into a python file - that way I can make some proper unit tests out of it
<mattyw> beisner, just wanted to check you were ok with that approach
<beisner> mattyw, yep, even better imho :)
<beisner> mattyw, marcoceppi aisrael ^ ?
<aisrael> mattyw: I think that's fine, as long as you're not symlinking to it. More testing is always better.
<beisner> coreycb, on a landing streak i see - thanks for that.  fyi, i've raised this re: the unrelated fails in n-c-c and n-c, and will be looking at adding api checks ahead of the gimme-an-instance tests.  bug 1503701
<mup> Bug #1503701: amulet test for new instance needs resilience <amulet> <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1503701>
<beisner> coreycb, and for n-g - bug 1503706
<mup> Bug #1503706: amulet test for create network needs resilience <amulet> <openstack> <uosci> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1503706>
<coreycb> beisner, ok.  are those fixes needed for the current amulet failures?
<beisner> coreycb, they're races
<beisner> coreycb, under load, the overcloud isn't quiesced, and the tests blindly say:  create an instance;  or create a network.
<coreycb> beisner, btw reviewing your mps got me wondering if we can figure out a way to have some global files in c-h or wherever our code lands after c-h (e.g Makefile, tests/00-setup, etc)
<beisner> coreycb, ie we have passing tests at the same revs
<beisner> coreycb, yeah ditto.  there are a couple of exceptions, where os api client isn't in helpers, but only in the local tests.  ceilometer / ceilometer-agent are two.
<beisner> coreycb, the tests.yaml file will still need to be present for bundletester to parse, so i'm not sure we have a c-h way out of maintaining that.
<beisner> coreycb, definitely wanna centralize that somehow though
<coreycb> beisner, do you need a hand with those bug fixes?
<beisner> coreycb, i think a check-wait-loop-timeout api check just ahead of the create-stuff.  much like this:
<beisner> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/amulet/utils.py#L485
<beisner> nova and neutron need that sort of love
<beisner> i've not started into it, but yeah if you've got  cycles and want to tackle that, that'd be awesome
<beisner> instead of waiting on an object's status, we need to try/except an api connection and query.
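A rough sketch of the resilience check being proposed: poll the service API with a timeout before the create-instance/create-network steps fire, instead of assuming the overcloud is quiesced. The names are illustrative; the real helper would live alongside the charmhelpers amulet utils linked above:

    import time

    def wait_for_api(query, timeout=600, interval=10):
        """Retry an API query (for example lambda: nova.flavors.list())
        until it succeeds or the timeout expires."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                query()
                return True
            except Exception:
                time.sleep(interval)
        return False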
<coreycb> beisner, lemme know the priority, cause yeah that cycles thing :)
<beisner> coreycb, atm it's just a nagging race, but as you saw in that batch, causes false failures, which causes us to re-run and tie up a test slave for another 2hrs.  and we may still have a race false/fail even with that.
<beisner> coreycb, it's a priority from my perspective, because of that (tying up test resources 2 or 3 times to eke out a solid run).
<coreycb> beisner, ok
<jamespage> thedac, gnuoy`:
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/ceph/status/+merge/273635
<jamespage> and
<jamespage> https://code.launchpad.net/~james-page/charms/trusty/ceph-osd/status/+merge/273634
<thedac> jamespage: great. I'll take a look today
<thedac> \o/
<jamespage> thedac, I've tested them out - they work OK
<thedac> cool
<jamespage> thedac, the only bit that's missing is the device detection in ceph
<jamespage> I'll work on that today - but feedback appreciated
 * thedac nods
<jamespage> beisner, hey - can we get dnspython installed on the unit testing envs for osci:
<jamespage> http://paste.ubuntu.com/12702241/
<beisner> jamespage, oo yuck the unit test is trying to apt-get install something?
<jamespage> beisner, well the charm-helper is trying to install dnspython cause it can't import it
<jamespage> it's one of the try: import, except: install, re-import things
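That idiom looks roughly like this, sketched for the dnspython case under discussion (the apt-get call is exactly what fires, and fails as non-root, in a unit-test environment that lacks the package):

    try:
        import dns.resolver
    except ImportError:
        # fall back to installing the package, then import again
        import subprocess
        subprocess.check_call(
            ['apt-get', 'install', '-y', 'python-dnspython'])
        import dns.resolver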
<beisner> jamespage, jenkins@juju-osci-machine-7:~$ apt-cache policy python-dns
<beisner> python-dns:
<beisner>   Installed: 2.3.6-3
<beisner> something else must be amiss
<jamespage> beisner: python-dnspython
<beisner> jamespage, ok now, reran, passing results @
<beisner> https://code.launchpad.net/~james-page/charms/trusty/ceph-osd/status/+merge/273634
<beisner> https://code.launchpad.net/~james-page/charms/trusty/ceph/status/+merge/273635
<jamespage> beisner, awesome
<jamespage> thanks
<beisner> jamespage, yw
<jamespage> beisner, probably because we've not really got unit tests in the ceph charms - this popped up when I wrote some :-)
<jamespage> when does cholcombe typically appear?  want to ensure I'm not stepping on his toes with my status proposals.
<beisner> jamespage, yeah, osd had 0.  so, yay!
<jamespage> beisner, my new tests are surgical - but made me feel better...
<apuimedo> jamespage: for the nova-cloud-controller changes
<apuimedo> when proposing for merging
<apuimedo> I should target the branch: lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next and put you as a reviewer, right?
<apuimedo> I'm getting a message
<apuimedo> jamespage: http://paste.ubuntu.com/12705281/
<apuimedo> I'm getting the same for gnuoy`
<Merlijn_S_> Quick question: the charmhelpers.core.host.chownr function doesn't chown the supplied dir itself, only its children. Is this intentional? Seems strange to me..
<mattyw> beisner, marcoceppi I've updated the mongodb actions suggestions https://code.launchpad.net/~mattyw/charms/trusty/mongodb/mongodb-backup/+merge/273544
<apuimedo> jamespage: gnuoy`: coreycb: why is nova-compute/next charm-helpers-hooks.yaml pointing to lp:~hopem/charm-helpers/add-rbd-cache-config-support ?
<apuimedo> shouldn't lp:~hopem/charm-helpers/add-rbd-cache-config-support have proposed a merge against lp:charm-helpers and then nova-compute/next keep pointing to lp:charm-helpers?
<gnuoy`> apuimedo, a mistake, should lp:charm-helpers
<apuimedo> gnuoy`: so... Can I fix it and send it with my merge request?
<gnuoy`> apuimedo, I'll fix it it was probably my error at review time
<apuimedo> gnuoy`: ok, I wait for that to sync and put my changes
<apuimedo> gnuoy`: did you see my question above about the target branch for nova-cloud-controller and the message I get from launchpad?
<gnuoy`> apuimedo, otp, but will look
<mattyw> marcoceppi, does UOSCI need updating with a charmhelpers that supports the latest action functions? http://paste.ubuntu.com/12705419/
<gnuoy`> apuimedo, charm-helpers-hooks.yaml should be fixed
<gnuoy`> apuimedo, yes, target lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next
<apuimedo> gnuoy`: it asks me if I want to nominate you
<apuimedo> as you don't currently have the permission to view branches
<apuimedo> (sounds like launchpad malfunction to me)
<gnuoy`> apuimedo, does it? I'd suggest not nominating anyone then it lands on the review queue for all to see
<apuimedo> ok
<apuimedo> gnuoy`: jamespage: so here you go https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/midonet/+merge/273717
<beisner> mattyw, it's actually failing because the unit test (as non-root) is attempting to apt install a pkg.
<beisner> mattyw, generally speaking, unit tests shouldn't actually "do" things to the host running the tests.  this looks like a case where you'll need to prep a venv in the makefile ahead of the tests running.
<mattyw> beisner, yeah  it only does that if it can't import the action_get function. It's messy but that seems to be the accepted way of doing that kind of thing
<mattyw> beisner, ah - good call
<beisner> ie. satisfy dependencies before the test is fired
<mattyw> beisner, what about on a unit - we'd need to make sure the deps were up to date there as well?
<beisner> mattyw - so in this charm (and all of the openstack charms), the necessary bits from charmhelpers are sync'd into hooks/charmhelpers.
<beisner> mattyw, `make sync`
<beisner> that of course has pros/cons
<mattyw> beisner, that only works in the hooks directory - I could have a similar one in the actions directory as well?
<beisner> mattyw, true.  which i believe is why in the swift charm, hooks/charmhelpers moved to lib/ so both actions/ and hooks/ could consume.  but that's more symlink foo!
<mattyw> beisner, I could do an extra sync location for actions - and add a TODO ;)
<beisner> mattyw, i'll defer to the primary maintainers of the mongodb charm on that (and all of this really).   there is how-we-do-it-now vs. how-we-would-like-to-see-it-done.   i can tell ya all about how we do it now ;-)
<mattyw> beisner, which charm was that?
<beisner> mattyw, swift-proxy and swift-storage ('next' dev branches), which will be part of the 15.10 openstack charm release
<beisner> https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-proxy/next
<beisner> https://code.launchpad.net/~openstack-charmers/charms/trusty/swift-storage/next
<mattyw> beisner, perfect thanks - I was looking at the current ones
<beisner> mattyw, so something will have to be either duplicated or symlinked, or pip installed.  each of the approaches have pros/cons
<mattyw> beisner, I'll go with symlink and wait for comments
<mattyw> beisner, done, let's see what trouble that causes ;)
<coreycb> gnuoy`, I pushed a new keystone version for action-managed upgrades, after a slight scare.  somehow I overwrote my branch on lp without using --overwrite, but luckily enough lp has history.
<beisner> fyi gnuoy`, dosaboy - wrt the n-c-c amulet test fails;  known race opportunity [ that's my positive spin thedac ;-) ] ... raised @ bug 1503701 -- unrelated to your proposed changes.
<mup> Bug #1503701: amulet test for new instance needs resilience <amulet> <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1503701>
<thedac> haha
<mattyw> beisner, thanks! great idea, sensible diff size now
<beisner> mattyw, woot!  still violates the antisymlink thing, but so does the rest of the charm.
<mattyw> beisner, rules are meant to be broken ;)
<beisner> mattyw, so are symlinks
<beisner> ba da bing
<mattyw> beisner, I'd be happy with having to sync locations - but I'm also happy to go with majority verdict :)
<apuimedo> lazypower: ping
<apuimedo> mbruzek: ping
<mbruzek> apuimedo: Hello!
<apuimedo> mbruzek: Ahoj
<apuimedo> how's it going?
<mbruzek> apuimedo: I am doing well.  And you?
<apuimedo> I got some amulet failure when syncing up the charm helpers
<apuimedo> and putting some template change
<apuimedo> http://paste.ubuntu.com/12706944/
<apuimedo> do you think it could be a flaky test, or that the charm was not ready for the sync?
<tvansteenburgh> looks like whatever server the novaclient is pointing at is broken, or was
<tvansteenburgh> i don't know anything about this though
<tvansteenburgh> beisner, jamespage ^
<beisner> mbruzek, apuimedo - that looks like n-c-c.  if so, it is a race in the test, working on it now actually.   bug 1503701   tldr: the deployed test cloud wasn't completely open for business at the moment the create instance routine fired off.
<mup> Bug #1503701: amulet test for new instance needs resilience <amulet> <openstack> <uosci> <nova-cloud-controller (Juju Charms Collection):New> <nova-compute (Juju Charms Collection):New> <https://launchpad.net/bugs/1503701>
<apuimedo> beisner: oh, thanks. I didn't know about this issue
<apuimedo> beisner: could you retrigger the test for the merge proposal once you fix it, then?
<beisner> just started seeing it.  it generally happens when the rest of the undercloud has elevated load.
<beisner> apuimedo, you would have to rebase, after finish, propose, and get the test updates landed.
<beisner> s/after/after i/
<apuimedo> beisner: I'll have to check how to rebase in bzr. I'm still fighting with it and wishing I had git :P
<beisner> apuimedo, i plan to have a proposal up tomorrow
<apuimedo> beisner: do you have some estimation for the bug fix?
<apuimedo> ah, good
<mbruzek> apuimedo: beisner is the expert on osci but if you have any amulet questions I can help
<apuimedo> mbruzek: perfect, thanks ;-)
<apuimedo> beisner: what's your tz, so I can tell when to ping you about future osci issues
<beisner> apuimedo, us central time
 * beisner goes to find lunch
<apuimedo> cool
<apuimedo> beisner: bon appetit
<mbruzek> apuimedo: You can ping me for osci problems too, I just might have to consult with beisner for the tougher ones
<apuimedo> ok
<apuimedo> mbruzek: thanks!
<apuimedo> beisner: the issue also happened with  nova-compute/next http://paste.ubuntu.com/12709505/
<beisner> apuimedo, yep, that bug is listed for both n-c and n-c-c
<apuimedo> cool
<apuimedo> ;-)
<george_e> Is it still necessary to bundle "charmhelpers" with a charm? (I'm specifically thinking of Precise here.)
<george_e> Or is there a better way?
#juju 2015-10-08
<george_e> \quit
<george_e> Heh - oops.
<blahdeblah> Hi all - is it possible for a subordinate charm to get information about the relations of its parent, and if so, are there any good recent examples of this I can look at?
<Icey> hey, is it possible for a charm to be subordinate to two others, basically we want to require either this charm or that one but not both
<jamespage> thedac, gnuoy`: basic but functional - https://code.launchpad.net/~james-page/charms/trusty/percona-cluster/status/+merge/273825
<gnuoy`> jamespage, fantastic, thanks
<jamespage> gnuoy`, gotta love jetlag ;-)
<gnuoy`> you having jetlag is certainly working out well for me!
<jamespage> lol
<jamespage> thedac, re https://code.launchpad.net/~thedac/charms/trusty/neutron-gateway/workgroup-status/+merge/273623
<jamespage> commented, tl;dr drop all the status stuff related to db's - its not required and is no-op even if optionally added
<jamespage> we need to drop the relations but thats different work imho
<mattyw> beisner, ping?
<Icey> any idea why changing the lxc default configuration for ip range would break juju with the local provider?
<Icey> I changed the bridge IP of lxcbr0 from 10.0.2.0 to 10.0.3.0 and now charms hang at Waiting for agent initialization to finish
<Icey> wait, I think I missed a config
<thedac> jamespage: thanks, I'll get that done.
<jamespage> thedac, awesome
<thedac> jamespage: gnuoy: For neutron-gateway making neutron-plugin-api required results in blocked even when related. It doesn't seem NeutronAPIContext is "registered" so it doesn't show as a complete context. ideas?
<kwmonroe> cory_fu: tvansteenburgh:  to sentry a unit in an amulet test, "self.deployment.sentry['tomcat'][0]" is preferred over "self.deployment.sentry.unit['tomcat/0']", right?
<cory_fu> Yes
<kwmonroe> thx
<cory_fu> The latter can fail if the charm has been deployed before in the environment, which can happen with how bundletester runs the tests and resets the environment between files within a charm or bundle test
<tvansteenburgh> after a reset, unit numbers *should* start over at 0, although machine numbers won't
<tvansteenburgh> even so, not hardcoding unit numbers is best
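A minimal amulet sketch of the preferred access pattern; the charm name is illustrative:

    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('tomcat')
    d.setup(timeout=900)
    d.sentry.wait()

    # preferred: index by service name, no hardcoded unit number
    unit = d.sentry['tomcat'][0]
    # fragile after environment resets: d.sentry.unit['tomcat/0']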
<gnuoy`> thedac, did you solve your neutronapi question?
<thedac> gnuoy`: no I have not. Would you have time for a quick hangout?
<thedac> gnuoy`: jamespage: when you have a chance https://code.launchpad.net/~thedac/charms/trusty/neutron-gateway/workgroup-status/+merge/273623
<jamespage> thedac, looking
<thedac> thanks
<jamespage> thedac, I thought that NeutronAPIContext was a base class for one of the other contexts for the charm, but it would appear not
<jamespage> thedac, its subtle but how about changing NeutronGatewayContext to inherit from NeutronAPIContext
<jamespage>         api_settings = NeutronAPIContext()()
<thedac> ah, ok, let me test that out.
<jamespage> would then become a call to its super class
<thedac> I'll run with that
<jamespage> thedac, super(NeutronGatewayContext, self).__call__()
<jamespage> thedac, I think that is neater
<thedac> ok
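A sketch of the refactor jamespage is proposing, assuming NeutronAPIContext comes from charmhelpers.contrib.openstack.context; the gateway-specific keys are illustrative:

    from charmhelpers.contrib.openstack.context import NeutronAPIContext

    class NeutronGatewayContext(NeutronAPIContext):

        def __call__(self):
            # start from the api settings provided by the parent context
            ctxt = super(NeutronGatewayContext, self).__call__()
            # then layer the gateway-specific settings on top
            ctxt['plugin'] = 'ovs'
            return ctxt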
<thedac> jamespage: fyi, the neutron-gateway changes are up.
<jamespage> thedac, ack
<thedac> Can I get a second opinion on https://code.launchpad.net/~thedac/charms/trusty/hacluster/workload-status/+merge/273889 ?
<thedac> coreycb: I merged your heat MP. Looking at the keystone action managed upgrade MP now. Let me know if there are more.
<coreycb> thedac, thanks!
<coreycb> thedac, I think that's good for now
<thedac> cool
<jamespage> gnuoy`, thedac: I commented on 2/3 midonet charms and the tintri charm - landed 1 midonet charm
<jamespage> thedac, looking at you proposal for neutron-gateway now
<thedac> thanks
<jamespage> thedac, +1 but I'd like to see an amulet +1 before its landed
<thedac> ack
<thedac> I ran a single amulet test successfully but waiting for OSCI is a good idea here
<coreycb> jamespage, thedac: I pushed updates to swift-proxy
<thedac> in the queue
<thedac> ls
<thedac> irssi should have a check for 'ls' and stop the insanity
<thedac> coreycb: lint and unit_test problems with swift-proxy.
<blahdeblah> Trying again with my question from yesterday: Is it possible for a subordinate charm to get information about the relations of its parent?  If so, are there any good recent examples of this?
<thedac> blahdeblah: I don't *think* so, but I will defer to juju devs
<jamespage> blahdeblah, no that's not possible - a sub only knows about its own relations
<blahdeblah> Thanks guys - so bottom line is if I want to do that I need to add relations for both the parent and the subordinate.
<blahdeblah> ^ s/\.$/?/
<thedac> correct. You can pass information back and forth between the primary and subordinate via their relation
<jamespage> thedac, ok so what's left on the list?
<thedac> https://code.launchpad.net/~thedac/charms/trusty/hacluster/workload-status/+merge/273889
<thedac> jamespage: https://code.launchpad.net/~gnuoy/charms/trusty/cinder-ceph/workloadstatus/+merge/273861
<jamespage> thedac, ok looking at hacluster
<thedac> I'd like your opinion on the approach in https://code.launchpad.net/~jjo/charms/trusty/swift-proxy/fix-multiple-devices-per-node_lp1479938/+merge/266462
<thedac> jamespage: I have to head out for a bit. I will check back in a couple of hours.
<thedac> and this one https://code.launchpad.net/~gnuoy/charms/trusty/ceph-radosgw/workloadstatus/+merge/273834
<jamespage> thedac, landed hacluster - looked like a good v1 ;-)
<thedac> cool
<thedac> ok, I'll be back a bit later
#juju 2015-10-09
<coreycb> thedac, thx I pushed swift-proxy with tests fixed up
<thedac> coreycb: did that get pushed up? I don't see it.
<coreycb> thedac, it's pushed up now, along with swift-storage and ceilometer-agent
<apuimedo> jamespage: answered your comments in https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/midonet/+merge/273717
<coreycb> gnuoy`, I pushed to the swift-storage status mp again if you want to take a look
<gnuoy`> coreycb, will do, thanks
<coreycb> gnuoy`, swift-proxy is also ready for re-review
<coreycb> gnuoy`, thanks
<jamespage> gnuoy`, morning
<jamespage> hey apuimedo - will look shortly
<apuimedo> thanks jamespage
<apuimedo> https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet/+merge/273772 as well
<jamespage> apuimedo, erm I can seen any response on either of those two merges?
<jamespage> can't rather
<apuimedo> mmm
<apuimedo> let me check
<apuimedo> jamespage: sorry about that, forgot to press save. They should be visible now
<jamespage> lol
<apuimedo> I'm a launchpad noob :P
<jrwren> is there a debugging guide for juju. Something I can follow when juju won't deploy new services or machines and the debug-log output seems to end 10 hours ago?
<apuimedo> jamespage: I submitted the fix to charm-helpers to add the missing relation (neutron-api/next will need to be synched again)
<jamespage> apuimedo, does MidonetContext get used elsewhere? otherwise lets avoid a ch-sync task and just drop it into the contexts for neuton-api itself
<apuimedo> jamespage: at the moment only in neutron-api
<jamespage> apuimedo, direct into neutron-api then is fine
<apuimedo> jamespage: right. I thought about that too
<apuimedo> so I'll put it there :P
<apuimedo> thanks
<apuimedo> jamespage: anything about the other comments I made?
<jamespage> apuimedo, still looking
<apuimedo> good, thanks
<jamespage> apuimedo, responded on nova-cc - I don't think config gets passed to all templates
<apuimedo> mmm, I tested this one like a month ago
<jamespage> apuimedo, NovaConfigContext
<apuimedo> but I'll test it again
<jamespage> does the same thing that you'll need
<apuimedo> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet/+merge/273772 should address your comments
<jamespage> apuimedo, I'm not sure the relative pathing for key file open calls will work
<apuimedo> I'll try it
<jamespage> hooks run from the toplevel of the charm
<jamespage> apuimedo, it might be better to use the charm_dir function to fully specify it
<jamespage> thats in hookenv I think
<apuimedo> let's see
<apuimedo> I'll read that method up
<jamespage> os.path.join(charm_dir(), 'files/midonet.key')
<apuimedo> ok
<apuimedo> I didn't know about that one ;-)
<jamespage> apuimedo, a unit test to cover the changes in neutron_api_utils.py would help validate that one way or the other
<jamespage> that should be pretty trivial
<apuimedo> what should it cover? installation of the source?
<jamespage> apuimedo, it should validate that, given plugin == midonet and source being one of mem/midonet, the right calls are made to add_source
<jamespage> just exercise the code paths appropriately
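As a sketch, a unit test along those lines might look like this; configure_plugin_source is a hypothetical stand-in for whatever function in neutron_api_utils.py actually calls add_source, and the mock target assumes add_source is exposed on that module via the charm-helpers sync:

    import unittest

    import mock

    import neutron_api_utils as utils

    class TestMidonetSource(unittest.TestCase):

        @mock.patch.object(utils, 'add_source')
        def test_midonet_mem_adds_source(self, add_source):
            # hypothetical entry point; the real name in the charm may differ
            utils.configure_plugin_source(plugin='midonet', source='mem')
            self.assertTrue(add_source.called)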
<jamespage> apuimedo, also my comment on the neutron.conf changes was wrong
<jamespage> the change to the juno file disables loadbalancer and firewall for anything other than midonet
<jamespage> I think it needs to reflect the way you did it in the kilo neutron.conf file
<jamespage> if midonet
<jamespage> else
<jamespage> ....
<apuimedo> jamespage: it should just be != "midonet"
<apuimedo> probably a typo
<jamespage> apuimedo, not sure but right now it will regress other plugins
<apuimedo> yes
<apuimedo> I'll push the fix now
<jamespage> apuimedo, I'm assuming that
<jamespage> +service_provider = LOADBALANCER:Midonet:midonet.neutron.services.loadbalancer.driver.MidonetLoadbalancerDriver:default
<jamespage> 531	
<jamespage> does not have juno context then?
<apuimedo> I think we did not have our own load balancer then
<apuimedo> I'll double check
<apuimedo> jamespage: fix pushed
<jamespage> apuimedo, I really would like to see unit tests for the code additions to this charm - the new context and the source configuration specifically
<jamespage> apuimedo, also can i ask why the username and password is not passed over the midonet relation, and is provided by config instead?
<jamespage> I may have already asked that at some point in the past..
<jamespage> can't remember
<apuimedo> jamespage: It used to be fetched from a charm called midonet-repository that you told me to remove and make the repo configuration be config and not a relation
<jamespage> apuimedo, oh - sorry being dumb - that's not the password to access midonet services - just the repos...
<apuimedo> midonet-api is not the owner of the repo information
<jamespage> apuimedo, I woke up too early
<apuimedo> yes
<jamespage> sorry
<apuimedo> no problem
 * jamespage <- jetlagged
<apuimedo> where did you fly?
<robbiew> Seattle
<apuimedo> nice
<rbasak> Is there any way to set Juju's default placement? So I don't need to say --to lxc:0 all the time with the manual provider, for example?
<jose> jcastro: ping
<Icey> is there a way to say that the other side of this relation has to be finished before running?
<jose> Icey: could you give a little more context?
<Icey> working on a charm that uses the elasticsearch relation
<Icey> the elasticsearch charm manages UFW
<Icey> our charm is trying to create an index in Elasticsearch but sometimes fails if it's relation-changed hook runs before elasticsearch's has opened up the ports
<jose> probably because there's no way for your charm to reach elasticsearch?
<jose> try adding a check to see if the port is open before running
<Icey> yeah, was just hoping to have more than an infinite loop to wait on the port
<jose> Icey: would you mind doing a paste of your relation-changed hook? I wanna check something real quick
<Icey> https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L192
<Icey> the part that's failing is in https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L181
<Icey>  / https://github.com/cholcombe973/ceph-metrics-collector/blob/master/hooks/hooks.py#L127
<jose> Icey: I think what you need to do there is check if elasticsearch has already set values for the relation, don't just run. your charm is running even though ES is not ready
<cholcombe> jose, for relation_get is the unit the ip addr of the ES unit?
<jose> huh?
<cholcombe> jose, i'm also working on Icey's problem
<jose> I know, but what did you mean with your last question?
<cholcombe> well you said grab the values that ES is setting with relation_get
<jose> yes. elasticsearch will set relation values once the unit is ready for a relaiton
<jose> relation*
<cholcombe> jose, ok great
<jose> tbh, I don't know what those values are since I don't know the interface, but they should be documented somewhere
<jose> otherwise, checking the charm source will do
<cholcombe> yeah i'm looking
<cholcombe> jose, if that relation value isn't set can i return 0 and expect to have my changed hook get called again or is it a one shot deal?
<jose> cholcombe: that's right! when the values change it will count as a relation change, meaning that the relation-changed hook will run again
<cholcombe> jose, ok cool
<cholcombe> i wonder why juju is calling me before the relation values are set
<jose> because a relation-joined is followed by a relation-changed
<cholcombe> i see
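A sketch of the guard jose describes, using charmhelpers; the relation keys and the create_index helper are illustrative, not taken from the elasticsearch interface:

    from charmhelpers.core import hookenv

    def elasticsearch_relation_changed():
        host = hookenv.relation_get('host')
        port = hookenv.relation_get('port')
        if not (host and port):
            # the remote side hasn't set its relation data yet; this
            # hook will fire again when it does, so just return
            hookenv.log('elasticsearch relation not ready, deferring')
            return
        create_index(host, port)  # hypothetical charm-specific helper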
<bdx> How's it going everyone?? Are there any plans in the works for an official repo for layers and interfaces? ....If not, an official general location for interfaces and layers would be sweet, and would also make the development and usage workflow much nicer:-)
<bdx> juju-solutions, core: I am getting this error http://paste.ubuntu.com/12726889/ when trying to login with launchpad to http://interfaces.juju.solutions/
<bdx> I assume it is because I am not in a group with the correct permissions
<asanjar> does anyone know what this error means http://paste.ubuntu.com/12727715/
<lazypower> asanjar, o/
<lazypower> asanjar, that looks like something went awry with the storage manager. what version of juju?
<asanjar> hi lazypower juju --version ==> 1.24.6-vivid-ppc64el
<lazypower> asanjar, Storage got a massive revamp in 1.25. If you're not opposed to running from the devel ppa, that error message *should* go away.
<asanjar> okay will do
#juju 2015-10-10
<jamespage> thedac, beisner, gnuoy`: I have MP's up to support swift 2.5.0 and resolve the installability problem with trusty-liberty
<jamespage> thedac, gnuoy`: I've put up reviews for ceilometer and openstack-dashboard for status reporting
#juju 2015-10-11
<bdx> happy sunday everyone! Can someone provide insight as to where and when the /etc/network/interfaces file is generated...?
<bdx> Thanks
<bdx> core, dev, openstack-charmers, openstack, charmers:^^
#juju 2016-10-10
<anrah> Hi,
<anrah> As https://jujucharms.com/docs/devel/howto-privatecloud for 2.0 is still being rewritten is that Devel version something I can use to test 2.0 on my private openstack?
<jamespage> junaidali, bbaqar: morning
<jamespage> junaidali, so regarding xenial branches; we really don't need to do that any longer.
<jamespage> the charm store does not ingest from launchpad bzr branches any longer; you publish directly to the charm store, decoupling VCS from publication altogether
<jamespage> are your trusty and xenial charm versions identical?
<jamespage> if so we can use the series in metadata feature and just have a single charm version for both - this is what we do across all other openstack charms in master branch (that will be released this week).
<junaidali> Hi jamespage, xenial charms have a few minor changes
<jamespage> junaidali, is it possible to add that into the charm, rather than having two different charms?
<jamespage> i.e. conditional code for xenial vs trusty?
<jamespage> junaidali, we've found it much easier to support a single codebase for multiple releases, rather than having lots of different branches for different versions
<junaidali> yes, we can do that.
<SimonKLB> hey, anyone could tell me how to automatically attach a resource when running an amulet test?
<holocron> "juju status" with lxd/local just hangs with no response, I've seen this before when lxd containers are not starting properly, but this does not seem to be the case now
<holocron> what's my first course of action for debugging, or gathering enough info for a bug report?
<holocron> "juju switch default" hangs, most juju commands are hanging
<rick_h_> holocron: try adding --debug to the commands and see if anything jumps out
<rick_h_> holocron: I thought there was a bug along those lines /me goes to look
<holocron> rick_h_ thanks, yes that's obvious -- juju.api is looping on a dial to wss://x.x.x.x:17070/api
<rick_h_> holocron: and are you able to reach that address?
<holocron> i can ping it yes
<holocron> it's a local lxd container, i'll exec bash into it
<rick_h_> holocron: then perhaps the controller went down for some reason? can you ssh/connect to the controller and check the logs there?
<rick_h_> holocron: maybe try to bounce jujud?
<holocron> yeah, jujud seems to be thrashing a bit
<holocron>  17330 root      20   0 49.690g 5.858g   7000 S   0.3 15.0 154:16.67 jujud
<holocron> it's not listening for that wss connection
<holocron> rick_h_ : jujud isn't a service it seems, should i restart the whole container, or is there a preferred method for just restarting the daemon?
<rick_h_> holocron: sudo service juju<tab>?
<rick_h_> holocron: let me look for the real name
<holocron> yeah, i'm not finding any service for jujud
<rick_h_> holocron: there you go, jujud-machine-0
<holocron> ah thank you.. i guess the tab completion isn't working in lxc exec bash
<holocron> rick_h_: were you able to find that bug? still hanging up on me here
<rick_h_> holocron: not yet
<rick_h_> holocron: did it restart for you?
<holocron> rick_h_: okay, yeah it did restart, but i'm getting mongodb connection errors in the log
<rick_h_> holocron: can you peek at the /var/log/juju/machinexxxx
<rick_h_> holocron: ah ok, so let's try restarting that then as well
<rick_h_> sudo service juju-db restart
<rick_h_> and then re-kick jujud after the db is restarted and see what's up with that
<rick_h_> holocron: is this on a laptop or something that's shutdown/back up often?
<rick_h_> holocron: or some other system?
<holocron> rick_h_ : it's running on a mainframe, it hasn't come down since it was working Friday night
<holocron> Oct 10 15:16:23 juju-84a348-0 systemd[1]: Failed to start juju state database.
<holocron> got a mongodb backtrace
<rick_h_> holocron: ah ok. well that's starting to look like a root cause
<rick_h_> holocron: and yea, exising bugs don't line up to this so treading new ground
<holocron> par for the course for me :P
<rick_h_> holocron: oh lucky you?
<rick_h_> holocron: what's the traceback from mongo look like?
<holocron> rick_h_: i'll paste it out in a moment
<holocron> rick_h_: https://gist.github.com/vmorris/7750b8f9d3dfaa14238df39f7628ea3a
<holocron> hmm, that doesn't have the trace in it :P
<holocron> sec
<holocron> rick_h_: please see revision 2 on that gist
<vmorris> something called wiredtiger reporting an i/o error when attempting to read data
<vmorris> i didn't run out of storage :/ using zfs on the localhost as per the normal setup
<vmorris> there's really nothing out of the ordinary here except i'm running s390x arch
<vmorris> let me trace back through the log and see if i can determine a first fault
<vmorris> rick_h_: could this be related? https://gist.github.com/vmorris/f81217815059c6fc748eaba8cc1b5318
<rick_h_> vmorris: sorry, missed you changed nick/continued the conversation
<rick_h_> vmorris: wiredtiger is the mongodb engine
<rick_h_> vmorris: looking at the gist
<vmorris> rick_h_: apologies for the nick switch
<rick_h_> vmorris: all good
<rick_h_> vmorris: sorry, I'm not a mongodb expert to know how to decipher that
<rick_h_> vmorris: please feel free to file a bug with the notes on arch, etc. It sounds like mongodb bit on something and that caused Juju to bail out on you.
<vmorris> rick_h_: this seems to be the case, yeah. i've had the juju controllers flaking out on me a bunch over the past few weeks, but i've chalked it up to my fooling with things. i've reset to working clean state a bunch in the past few weeks and finally got to a good position to uncover a few things
<bdx> mbruzek, lazyPower: sup sup
<bdx> cory_fu: sup
<bdx> lets say I want to re-gen certs, and re-render my nginx config when fqdn config is changed
<bdx> would this be a good way of accomplishing ^ -> https://github.com/jamesbeedy/charm-documize/blob/master/reactive/documize.py#L200-L207 ?
<bdx> if I remove line #136 -> https://github.com/jamesbeedy/charm-documize/blob/master/reactive/documize.py#L119-L136
<bdx> If #136 is removed, would my charm be requesting a new cert every time the hook environment executes?
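One way to express that in reactive terms, as a sketch: react to the config.changed.fqdn state (set by the base layer when a config option changes) and drop the states that gate cert generation and nginx rendering, so those handlers re-run exactly once. The state names here are illustrative, not lifted from the linked repo:

    from charms.reactive import when, remove_state

    @when('config.changed.fqdn')
    def fqdn_changed():
        # the handlers gated on these states will re-run, regenerating
        # the cert and re-rendering the nginx config for the new fqdn
        remove_state('documize.ssl.available')
        remove_state('documize.nginx.configured')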
<vmorris> arosales: ping https://bugs.launchpad.net/juju/+bug/1632030
<mup> Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error <juju> <juju-db> <mongodb> <s390x> <juju:New> <https://launchpad.net/bugs/1632030>
<Prabakaran_> Hello Team, getting a timeout issue while commissioning the node in MAAS 1.9. Logs are pasted here: http://pastebin.ubuntu.com/23301741/ Could somebody help me with this?
<vmorris> Prabakaran_: have you confirmed that you've imported the boot images? https://maas.ubuntu.com/docs/install.html#import-the-boot-images
<Prabakaran_> vmorris: I have imported images in the maas ui
<vmorris> Prabakaran_: just have to ask, you did confirm that they were imported before trying to commission?
<vmorris> It looks like you're trying to commission KVM virtual machines?
<Prabakaran_> ya that was the 1st step i did before commissioning...
<Prabakaran_> ya correct .. i am commissioning the virsh nodes
<arosales> vmorris: thanks for the bug report :-)
<vmorris> arosales: cheers :) i've torn the mess down and am about to redeploy the openstack-on-lxd bundle
<arosales> Seems it goes into a defunct state
<arosales> Will take a look and see if I can reproduce
<vmorris> same
<arosales> vmorris: Ubuntu 16.04 with updates I presume
<vmorris> VERSION="16.04.1 LTS (Xenial Xerus)"
<vmorris> arosales: I have a slightly modified version of the rabbitmq-server charm that i'm deploying as well to try and get more visibility into https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271
<mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
<arosales> Thanks for pitching on rabbit bug
<vmorris> heh, sure i'll surface something if it works for me.. but it's purely selfish motivation =D
<bdx> arosales: are you guys responsible for the percona-cluster charm?
<bdx> beisner, arosales: this hardcoding is a piss off -> http://bazaar.launchpad.net/~openstack-charmers-next/charms/precise/percona-cluster/trunk/view/head:/hooks/percona_utils.py#L64
<bdx> beisner, arosales: can we fix that hard coding of the pkgs so we can use latest version of percona?
<beisner> hi bdx, what specifically are you needing to do?
<arosales> Yes, specifically the openstack charmers
<bdx> beisner: I need features that were introduced in 5.7
<beisner> bdx, fyi, that charm's source of truth is https://github.com/openstack/charm-percona-cluster.  i have a pending request out to deprecate the old LP branches, which may not always be in sync with the latest bits.
<bdx> beisner: aaah nice, thx
<beisner> bdx, on which version of ubuntu ?
<bdx> beisner: xenial
<bdx> beisner: is there a problem with using percona repos by default?
<bdx> https://www.percona.com/doc/percona-server/5.7/installation/apt_repo.html
<beisner> bdx - we only test with ubuntu repos and the cloud archive pockets.
<bdx> ahh
<beisner> bdx, even yakkety has 5.6 atm so that's as bleeding edge as we get.  https://launchpad.net/ubuntu/yakkety/+source/percona-xtradb-cluster-5.6
<beisner> bdx, all of that said, /me looks at that repo... :)
<bdx> darn ... so -> http://paste.ubuntu.com/23304600/
<bdx> ^ shows that 5.7 is xenial, just not for percona?
<bdx> I see
<bdx> ok
<beisner> right, mysql might be ahead of percona-cluster packaging
<bdx> beisner: percona-cluster charm has a config for 'source' and 'key'; if the apt package for percona wasn't hard-coded, I could then set 'source', 'key', and hypothetically 'package' and get the latest, right?
<bdx> I might just rig something up in my personal namespace for the time being - just needed the scoop on what the deal is so I know how best to move forward
<bdx> beisner: thx
<beisner> bdx, that hard-codedness is indeed a bit of crack, but was necessary crack whilst the packaging was in limbo around the vivid:wily timeframe.  with both wily and vivid now eol, we can pull that out after 16.10 release (too late in the feature freeze to do that now).  also i'd be all for making sure the charm can use the upstream repos.  but .. someone will have to get pretty clever to try to predictively know what the package name is going to be, given
<beisner> that the version number is in the package name :-/
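For reference, a sketch of the configuration bdx is describing: the charm's existing 'source' and 'key' options can point apt at the Percona repo, while a 'package' option is hypothetical and does not exist in the charm today (the name is hardcoded in percona_utils.py):

    percona-cluster:
      charm: cs:trusty/percona-cluster
      options:
        source: "deb http://repo.percona.com/apt xenial main"
        key: "<repo signing key>"
        # hypothetical -- would let the charm install e.g. 5.7 packages:
        # package: percona-xtradb-cluster-57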
<MrDanDan> hi guys
<vmorris> hi
<spaok> is there a python module or layer I can use to pull info from MAAS 2.0 in my charm? I want to get at the network space info
<smgoller> Hey all, I'm trying to expand our juju-deployed openstack cluster, so I ran "juju add-machine". After it being stuck in a pending state for a number of minutes, I manually ssh'd into the machine and found that cloud-init had tried to run something involving 'curtin' and it failed because curtin wasn't installed. I also can't find any references to anything trying to install it. Has anyone run into a similar problem?
<spaok> are you using custom images or something?
<smgoller> nope
<smgoller> so it looks like cloud-init is downloading some script from maas with a self-extracting archive that contains curtin inside it
<smgoller> hm.
<spaok> smgoller: never seen curtin fail to install in our deployments, I always thought it was baked into the ubuntu images
<smgoller> i'm hesitant to blame juju on this yet. we've got some network wonkiness
<spaok> smgoller: if you do apt update on that server do you get errors?
<smgoller> it looks like intermittent dns resolution problems
<smgoller> but the dns server is fine, so i think it's a layer 3 thing
<smgoller> spaok: so I'm less concerned about juju at the moment and trying to make sure the infrastructure is solid
<spaok> usually a good first step :)
#juju 2016-10-11
<diddledan> I see juju 2.0 rc2 is available and the updated charms store requires at least beta16. Xenial seems to currently carry beta15 so I'm unable to progress. Should I file an issue against ubuntu/xenial to request a SRU for juju to be updated?
<Randlema1> diddledan: i doubt they didn't think of that..
<mgz> diddledan: you can use the ppa for a newer juju version on xenial for now
<mgz> we're in the process of getting rc3 back to xenial
<diddledan> cool
<Randlema1> I also have a question! Regarding Nagios and JuJu. We added the Nagios charm and the NRPE charm. Added relations. But nagios refuses to monitor our disks :(...
<Randlema1> Anyone who could shed a light on that? What could we be doing wrong.
<mgz> diddledan: see the release announcement: <https://lists.ubuntu.com/archives/juju/2016-October/007989.html>
<diddledan> are we sure there is a package in the ppa? Candidate: 2.0~beta15-0ubuntu2.16.04.1
<diddledan> that's from apt policy with the ppa enabled
<diddledan> oh, /devel;
<diddledan> sorry, I have the juju/stable ppa . silly billy me :-p
<mgz> :)
<mgz> Randlema1: no idea without going off and looking at the nagios charm I'm afraid
<Randlema1> mgz: np :)
<lazyPower> mornin #juju o/
<vmorris> mornin
<Randlema1> evening
<Spaulding> Hello folks!
<Spaulding> https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5
<Spaulding> Could someone look at it and tell me why this reactive script is running in a loop?
<lazyPower> Spaulding a pastebin or github would be helpful
<Spaulding> lazyPower: haha you joined too late ;)
<Spaulding> https://gist.github.com/pananormalny/35d2ae4f1651145ff0d8cfcd4a196ad5
 * lazyPower shakes a fist at connectivity issues
<Spaulding> hmm, maybe I need to change check_call to call?
<magicaltrout> Spaulding: you should be able to move all that apt stuff to layer.yaml i'd have thought
<marcoceppi> Spaulding: yeah, if you use the apt layer, all that complexity goes away
<marcoceppi> Spaulding: there's also a lot of other things to simplify
<magicaltrout> Spaulding: also
<magicaltrout> you don't set
<magicaltrout> sme.installed
<magicaltrout> so it will always run the install hook
<Spaulding> ok
<Spaulding> understood
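A minimal sketch of what magicaltrout and marcoceppi are suggesting, assuming the charm is called 'sme' and the package list is illustrative:

    # layer.yaml -- let layer:apt install the packages declaratively
    includes: ['layer:basic', 'layer:apt']
    options:
      apt:
        packages: ['apache2', 'perl']

    # reactive/sme.py -- gate one-time setup on a state so the handler
    # stops re-running after install
    from charms.reactive import when_not, set_state

    @when_not('sme.installed')
    def install_sme():
        # one-time configuration goes here
        set_state('sme.installed')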
<Spaulding> marcoceppi: it's my first charm
<Spaulding> so still learning how to do it properly..
<marcoceppi> Spaulding: I'll give you a few updates
<marcoceppi> Spaulding: to help get you on the right track
<Spaulding> \o/
<Spaulding> ok, i've read something more about layer-apt
<Spaulding> looks promising...
<marcoceppi> Spaulding: yeah, I think you'll like the result
 * marcoceppi continues to update your gist
<marcoceppi> Spaulding: is GID important?
<marcoceppi> Spaulding: or is it just so you can add a user to that GID
<marcoceppi> Spaulding: as in, could you let the system autoassign or does the software expect an explicit GID mapping
<Spaulding> i mean... those numbers are not that important
<Spaulding> but users need to be assigned to specific groups
<marcoceppi> right
<marcoceppi> gotchya
<Spaulding> and gid => 5000 so they'll not collide with other groups... rather future-proof hack...
<Spaulding> users got also high uid >= 5000
<marcoceppi> Spaulding: yeah
<marcoceppi> Spaulding: just about done
<Spaulding> :)
<marcoceppi> Spaulding: https://gist.github.com/marcoceppi/2eb1cf988f7f64eb535b290b1de29632
<marcoceppi> Spaulding: there's a lot going on in there, I tried to leave comments where I could
<marcoceppi> Spaulding: there's a lot ot unpack, and I have to go for a bit, but lots of others here should be able to help, I'll try to check back in a bit
<magicaltrout> marcoceppi: can you write my charms please?
<Spaulding> marcoceppi: thank you! :)
<marcoceppi> Spaulding: no problem, by the looks of it, you'll probably find the apache layer useful as well
<Spaulding> marcoceppi: yeah, i guess
<Spaulding> hmm... it's hard to google some layers...
<Spaulding> marcoceppi: I can only see layers related to apache... but not strictly layer-apache..
<marcoceppi> Spaulding: yeah, my bad. There's a search bar top right on http://interfaces.juju.solutions/ but it appears there's only apache-php and apache-wsgi
<marcoceppi> no base apache layer
<Spaulding> exactly! but still I think I can use apache-php... even some parts of it
<Spaulding> cause I'll need to manage some files
<marcoceppi> Spaulding: yeah, when I create sharable layers, I usually do everything in my layer, then I find the parts that I see as reusable and strip them out into their own layer
<marcoceppi> we're long overdue for an apache layer, much like how we have an nginx layer
<Spaulding> so you're working on apache layer?
<Spaulding> i mean - ubuntu team...
<marcoceppi> Spaulding: not at the moment
<marcoceppi> Spaulding: if I come across a project that needs an apache layer I'll probably do one, but most of the stuff I charm up uses nginx
 * marcoceppi signs off for a bit
<marcoceppi> Spaulding: oh, well, I think we can take the apache-php layer and instead make it just apache and then make apache-php use the apache layer and php layer and just merge them there but the php layer hasn't been published yet (I'm still getting that one ironed out)
<Spaulding> Unfortunately I can't use nginx
<Spaulding> not right now...
<Spaulding> not with this project (suexec, perl etc.)
<marcoceppi> yeah, no worries
<marcoceppi> I'm just commenting on why we don't really have a base apache layer, at least why I havent created one
<Spaulding> but still, because of you now I see how juju and layers works
<Spaulding> cause juju docs - hmm... they don't have enough information
<lazyPower> Spaulding - filing bugs against http://github.com/juju/docs/issues  will help us target those areas that aren't clear and expand on the information there.  Specifically if you could call out missing concepts, verbiage, etc. that would have helped that would be tops.
<Spaulding> lazyPower: sure, will do!
<lazyPower> thanks Spaulding :)
<stokachu> rick_h_: whats the status on https://bugs.launchpad.net/juju-release-tools/+bug/1631038
<mup> Bug #1631038: Need /etc/sysctl.d/10-juju.conf <juju-release-tools:Triaged by torbaumann> <https://launchpad.net/bugs/1631038>
<stokachu> i think we're hitting this with our single system deployment of openstack
<Siva> I have deployed an application and I find that the 'juju status' shows the machine in pending state, though the MAAS UI says the machine is 'Deployed'
<kwmonroe> cory_fu: i've been thinking about the bigtop hdfs smoke-test failing with < 3 units.  without a dfs-peer relation, can i detect how many peers a datanode might have?  if not, i'm thinking of running a dfsadmin report to get a count of live datanodes.  thoughts?
<Siva> It is in pending state for 15 minutes now
<Siva> Is there a log file I can look at to debug and see what is happening?
<cory_fu> kwmonroe: I don't think there's a way to count the units w/o a peer relation, though it would be trivial to add one.  But dfsadmin seems reasonable, too
<kwmonroe> ack cory_fu, dfsadmin is trivialier for me to add ;)
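A hedged sketch of the dfsadmin approach (Hadoop 2.x CLI): the smoke test would parse the live-datanode count from the report instead of relying on a peer relation:

    # on a namenode unit, as the hdfs user
    hdfs dfsadmin -report | grep '^Live datanodes'
    # -> Live datanodes (3):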
<Siva> Any help is much appreciated
<zeestrat> Randlema1: You hitting something like this with Nagios and the NRPE charm? https://bugs.launchpad.net/charms/+source/nagios/+bug/1605733
<mup> Bug #1605733: Nagios charm does not add default host checks to nagios <family> <nagios> <nrpe> <unknown> <nagios (Juju Charms Collection):New> <https://launchpad.net/bugs/1605733>
<Siva> I am seeing this status for 1 hr
<Siva> 1  started  192.168.1.252  4y3hkf  trusty  default
<Siva> 4  pending  192.168.1.29   4y3hkg  trusty  default
<Siva> which log file should I look at to see why the machine is not moving to 'started' state?
<kwmonroe> Siva: can you ssh to the pending unit?  either "juju ssh 4" or "ssh ubuntu@192.168.1.29"?
<kwmonroe> Siva: also, does "juju status 4 --format=yaml" tell you more about why the machine is pending?
<Siva> Nope. I am not able to ssh into it
<Siva> I see on the machine console that curl call to juju-bootstrap node to connect on port 17070 is failing
<Siva> Here is the output
<Siva> http://pastebin.ubuntu.com/23309745/
<kwmonroe> Siva, is there a way to get onto machine 4 from maas?  i'm curious what your /var/log/cloud-init* logs look like
<rick_h_> stokachu: it's setup to be in GA to get the extra config so you get a couple more LXD ootb, but it's not a big swing
<rick_h_> stokachu: to get the big changes you need a logout/back in or reboot and we can't do it ootb
<stokachu> rick_h_: gotcha
<stokachu> rick_h_: ill document this on our side
<rick_h_> stokachu: yea, we now show the link to scaling lxd wiki page on bootstrap for lxd because of it
<stokachu> rick_h_: you have that link so i can keep the message the same?
<rick_h_> stokachu: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
<stokachu> thanks
<bdx> hows it going everyone?
<bdx> is there a way to specify what subnet I bootstrap to?
<rick_h_> bdx: sorry, a couple of folks that would know were on holiday last week. I'm sending an email right now to find out and will get back to you tomorrow
<rick_h_> bdx: they're in EU end EOD atm
<rick_h_> but will be back in the morning
<rick_h_> bdx: this is on AMZ correct?
<rick_h_> bdx: or something else?
<bdx> rick_h_: thats great, thanks
<bdx> rick_h_: yea, aws
<rick_h_> bdx: and this is not going to work with the vpc-id constraint?
<rick_h_> bdx: because there's > 1 subnet or any other details I can pull out?
<bdx> rick_h_: exactly ... I have 50+ and growing subnets
<rick_h_> bdx: k, will see.
<bdx> rick_h_: thx
<katco> someone here was asking about lxd and zfs?
<rick_h_> katco: sorry, the canonical one
<katco> rick_h_: oh... i cannot atm :( my server is down
<katco> rick_h_: hd went out
<rick_h_> katco: ok, but can you join from the current client?
<katco> rick_h_: i don't have any certs or anything
<katco> rick_h_: i can try and get that set up..
<ahasenack> katco: hi, I was just wondering how juju calls lxd init
<ahasenack> if it requests zfs or not
<ahasenack> juju 2 rc3 specifically
<ahasenack> on xenial
<katco> ahasenack: try running bootstrap with --debug; it should provide some information about the rest calls
<ahasenack> katco: this is the maas provider, where I did a deploy --to lxd:0
<ahasenack> it's that lxd
<katco> ahasenack: ah
<ahasenack> katco: I'm seeing abysmal i/o performance inside that container, and I checked and saw that the host has the lxd containers backed by one big zfs image file
<ahasenack> I haven't seen this before, and I can't tell if it's new
<katco> ahasenack: so you're wondering if it requests zfs by default?
<ahasenack> yes
<ahasenack> if not, there are other clues I can chase
<ahasenack> like, if zfsutils is installed, then lxd will pick zfs by default, I'm told
<katco> ahasenack: it certainly looks like if series == "xenial" it's going to initialize a zfs pool
<ahasenack> hmmm
<katco> ahasenack: trying to figure out where that gets used...
<ahasenack> katco: does it create the pool beforehand, file-backed?
<ahasenack> I got a 100G pool
<ahasenack> in the host
<katco> ahasenack: yeah: https://github.com/juju/juju/blob/78273ef59ee77c0be55f761346917cfe63842dcd/container/lxd/initialisation_linux.go#L136
<katco> ahasenack: ah, so it looks like it's telling lxd to use zfs from the get-go... juju doesn't do anything else after that
<katco> ahasenack: all created containers will allocate to that pool backed by zfs
<ahasenack> 90%
<katco> ahasenack: alarming, but at least it's sparse
<katco> ajmitch: i don't know where that magic number came from
<ahasenack> katco: I think lxd caps it at 100G
<katco> ajmitch: oops sorry for misping
<ahasenack> katco: ok, so if xenial, that happens. Else, it's just "lxd init"?
<katco> ahasenack: i believe it will just use the presumably running lxd daemon. init in this case is just to initialize a storage pool
<ahasenack> got it, thanks
<katco> ahasenack: hth
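A few commands to confirm what katco describes, assuming LXD 2.x defaults (the pool name and image path may differ on other setups):

    zpool list                             # shows the pool and its size
    lxc config get storage.zfs_pool_name   # the pool LXD was told to use
    ls -lh /var/lib/lxd/zfs.img            # sparse loop file backing the pool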
<vmorris> bdx rick_h_: this really needs to be updated https://jujucharms.com/docs/stable/network-spaces
<vmorris> I guess you can only work with spaces if MAAS is the undercloud, but in juju 2 it doesn't seem that spaces are configurable directly
<jgriffiths> Is there any way to use a local juju controller (lxd) to deploy bundles to a remote MAAS system? It seems a waste to spin up an entire server just to act as a controller.
<jgriffiths> Sorry if that is a dumb question. Just looking at maas/juju for the first time and need to know if I need an extra piece of physical hardware for the juju controller.
<vmorris> jgriffiths: i asked a similar question but yeah, you've gotta have a whole dedicated machine in your MAAS cluster for the juju controller
<vmorris> jgriffiths: i also find it silly, but currently running MAAS all with KVM VMs, so I can get away with a tiny VM for the controller
<jgriffiths> vmorris: Thanks! I've been looking around the internet for a couple hours trying to find out how to do it before I started thinking that it wasn't even possible. It's a huge waste in a bare metal environment. So, it looks like I need a physical MAAS server, a physical controller, and all the nodes for Openstack.
<jgriffiths> And a grammar teacher.
<vmorris> jgriffiths: yeah if you're looking to run the openstack-base bundle, it's really 5 machines :P
<vmorris> plus the maas controller, yep
<vmorris> jgriffiths: my current configuration has maas-controller and 4 maas machines all as KVM guests on the same physical host
<vmorris> sorry, 5 maas machines
<vmorris> jgriffiths: it's stable, but complex.. I've been exploring the openstack-on-lxd approach for about a week, and it's not quite as stable in my experience
<vmorris> i still like the pure lxd approach though
<jgriffiths> One last stupid question then. Do I need a controller if I'm manually deploying the individual charms? And now that you mention it, is anybody making an openstack-base style bundle without any containers at all (apt-get everything)?
<jgriffiths> I'm still learning all this and have a lot of holes in my knowledge of charms.
<vmorris> jgriffiths: I'm not aware of any way to deploy a charm without a controller
<vmorris> jgriffiths: I'm also unaware of anyone at canonical doing anything to deploy openstack from a vendor perspective outside of juju
<vmorris> but i am not an expert in the matter!
<jgriffiths> Thank you very much vmorris!
<vmorris> sure :_)
<spaok> jgriffiths: you can deploy the openstack bundle to whatever
<spaok> containers are just easy low overhead servers
<vmorris> spaok: not without a juju controller bootstrapped
<vmorris> that was the first q
<spaok> vmorris: correct
<spaok> I was talking about the container part
<vmorris> ah ok, but still they're going to end up running services in containers on the machines
<vmorris> right?
<spaok> not really, you can use KVM
<spaok> we don't run containers for our vm's yet
<vmorris> ah yeah, that's right re: KVM
<vmorris> do you have a link to that? I saw the bug report https://bugs.launchpad.net/juju/+bug/1547665
<mup> Bug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count> <juju:Triaged> <https://launchpad.net/bugs/1547665>
<jgriffiths> I was referencing the "openstack-on-lxd approach" not being stable comment and thought vmorris was suggesting not using containers for the components.
<vmorris> spaok: or are you just talking about juju 1?
<vmorris> jgriffiths: it's just not stable for me, i've heard it works.. just hasn't been my experience yet
<spaok> I'm saying KVM as the compute type for openstack
<spaok> as for the bundle part, I would target physical servers
<spaok> in MAAS or something
<jgriffiths> Oh. So you're not running any services inside containers?
<vmorris> spaok: alright, then we're talking about different things
<spaok> vmorris: ya, there's the compute VM's and the OpenStack services, the latter can be deployed to physical servers, LXD containers, or even KVM (maybe that bug you referenced)
<spaok> I was saying you can build on all physical servers with KVM type compute nodes and have zero containers
<vmorris> spaok: using the openstack-base bundle?
<spaok> ya, you just modify it slightly
<spaok> specify machines and make them the targets for the services
<vmorris> spaok: yep alright, good point there
<spaok> though I would say its a waste of hardware
<spaok> and not how we are building our new production systems
<spaok> we us containers
<spaok> s/us/use/
<vmorris> yep
<spaok> jgriffiths: I think containers for openstack services is pretty stable, the questionable one is LXD as the compute backend
<spaok> but that does work too
<spaok> which is the bundle you referred to
<jgriffiths> That's what I thought. It seemed pretty stable. Thanks spaok
<spaok> the new HA stuff using DNS is pretty good too , we had a lot issues with the hacluster charm way of doing it
<spaok> but our setup is fairly off the beaten path also
<vmorris> spaok: do you have any published architectures or white papers?
<spaok> no not really, our original design was based on https://github.com/wolsen/ods-tokyo-demo/blob/master/ods-tokyo.yaml
<spaok> that we added a bunch of overlay to and some other stuff
<spaok> the new "2.0" version of our stuff we stripped most of that out
<vmorris> cool, thanks
<smgoller> maybe I should start here instead. :)
<smgoller> so I've deployed openstack via the charm, and I'm working on increasing the size of the cluster. Now, the readme says to scale out neutron to do "juju add-unit neutron-gateway", but I only need neutron-openvswitch on the additional nodes. but if I try to add-unit neutron-openvswitch it complains about it being a sub-charm or somesuch. Any ideas?  In the initial deployment, neutron-gateway only ends up on 1 machine instead of all 4.
#juju 2016-10-12
<junaidali> Hi everyone, the get_hostname function in charmhelpers charmhelpers/contrib/network/ip.py is returning the hostname as "eth4.juju-5a02c3-4-lxd-18.maas" instead of "juju-5a02c3-4-lxd-18.maas"
<junaidali> where eth4 is the interface of ip that is provided to get_hostname
<junaidali> rabbitmq charm is failing on config-changed hook as the above mentioned issue writes the wrong hostname in /etc/rabbitmq/rabbitmq-env.conf
<junaidali> anyone faced this issue?
<Spaulding> gday!
<Spaulding> is there any easy way to copy file to juju charm? or basically i need to include it into hooks/reactive?
<Spaulding> there is render() for templates... should be enough for me.
<magicaltrout> Spaulding: whats its purpose?
<magicaltrout> you can just include stuff in the charm package for templates and stuff
<magicaltrout> or you have resources for installable packages etc
<junaidali> the interface appended hostname that i mentioned earlier is a new feature in MAAS 2.0 but rabbitmq and many other charms are relying on reverse dns lookups to determine hostname https://bugs.launchpad.net/maas/+bug/1584569.
<mup> Bug #1584569: maas 2 adds multiple DNS entries for nodes <canonical-bootstack> <MAAS:Won't Fix> <https://launchpad.net/bugs/1584569>
<junaidali> this issue seems to be pretty critical for the charms, are there any workarounds available?
<magicaltrout> probably junaidali but the openstackers in this channel appear to be sleeping
<magicaltrout> i'm not sure if jamespage can point you to someone who can help you
<jamespage> junaidali, magicaltrout: this is a tricky one, which we 'fixed' and then 'unfixed' as it completely made juju 1.25.x unreliable
<magicaltrout> lol
<jamespage> the key problem here is juju makes no guarantee to charm authors about resolvability of the local hostname to something other than 127.0.0.1
<jamespage> so on different providers, you get different behaviour
<jamespage> RabbitMQ kinda fails to understand IP addresses
<jamespage> so we have to use hostnames - the ip helper attempts to resolve the local IP to a real hostname - under MAAS 1.9 this works ok (a reverse-lookup of IP == single hostname)
<jamespage> but for 2.0 it results in two matches
<junaidali> thanks jamespage, completely understand now. is it reliable to get the second-to-last occurrence of a hostname split on '.' instead of getting the first one?
<jamespage> junaidali, hmm
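A hedged sketch of the workaround junaidali is floating -- strip a leading interface label off the looked-up name rather than trusting label positions; the interface-name pattern here is an assumption, which is likely why jamespage hesitates:

    import re

    def strip_iface_label(fqdn):
        # 'eth4.juju-5a02c3-4-lxd-18.maas' -> 'juju-5a02c3-4-lxd-18.maas'
        return re.sub(r'^(eth|ens|eno|enp|bond|br)[^.]*\.', '', fqdn)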
<Spaulding> magicaltrout: i just want to include some of my config files
<magicaltrout> ah right Spaulding anything you put into the charm directory will be shipped when you run charm build
<magicaltrout> but you have a few options as well depending how they work, you could include them as resources still, that way people can overload them
<magicaltrout> similarly, you could keep the config options in config.yaml
<magicaltrout> and have users set them then populate a template when the install hook runs
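A minimal sketch of the config.yaml-plus-template route magicaltrout describes, using charmhelpers' render (the template name and config key are hypothetical):

    from charmhelpers.core import hookenv
    from charmhelpers.core.templating import render

    # render templates/myapp.conf.j2 into place using a charm config value
    render(
        source='myapp.conf.j2',
        target='/etc/myapp/myapp.conf',
        context={'port': hookenv.config('port')},
    )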
<anrah> I have problem with juju 2.0 and ipv6 addresses, on 1.25 when prefer_ipv6 was true and querying it on hookenv.unit_private_ip method it returned ipv6 address
<anrah> now with juju 2.0 it returns ipv4 address and that is a problem as some services I am deploying bind only to ipv6 address
<Spaulding> magicaltrout: i figured out different approach
<Spaulding> imo the simplest one
<Spaulding> I'll tar all of files that I need
<Spaulding> and I'll just fetch them and untar them
<Spaulding> so I should have most of the files in the right place
<magicaltrout> Spaulding: by fetch you mean fetch over the web?
<magicaltrout> in which case, thats basically what Resources were implemented to replace, for offline deployments behind firewalls etc
<petevg> kwmonroe: left a comment on https://github.com/apache/bigtop/pull/148. I get an error when deploying bundle-local.yaml :-/
<Spaulding> magicaltrout: yes
<magicaltrout> Spaulding: i'd consider using resources then imho
<magicaltrout> also you can version resources etc which keeps stuff working as charm versions change
<Spaulding> magicaltrout: ok, I'll try to check that
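A sketch of the resources route, assuming a resource named 'payload' (juju 2.0 syntax):

    # metadata.yaml
    resources:
      payload:
        type: file
        filename: payload.tar.gz
        description: config tarball shipped out-of-band

    # then, from the workstation:
    #   juju attach <application> payload=./payload.tar.gz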
<cory_fu> Has anyone taken a look at the charm testing libraries that free put up on the list yesterday?  bcsaller_, petevg?
<petevg> cory_fu: I'm behind on email right now. Looking ...
<petevg> cory_fu: which list was it? (I may be missing a subscription ...)
<cory_fu> petevg: juju-dev.  I'm going to request that it be cross-posted to the main juju list
<aisrael> Anyone have ideas on how to debug Juju 2 rc3 when it lxd allocation hangs forever? Some machines launched fine, and retry-provisioning the machine doesn't do anything.
<lazyPower> aisrael only lxd images launched via juju? assuming lxd launch ubuntu works as expected?
<aisrael> lazyPower, Correct. I deployed a bundle with four services. Two deployed fine. The other two didn't, and those machines aren't created in lxd either (so it's not cloud-init)
<lazyPower> series were the same across the board?
<aisrael> Ohh. You might be on to something.
<aisrael> Two trusty, two xenial.
<aisrael> Neither trusty machine has launched
<lazyPower> i think there's a bug on this
 * lazyPower digs
<aisrael> ding ding ding
<aisrael> error: Failed to clone the filesystem
<aisrael> trying to launch a trusty image via lxd
<lazyPower> happy to help add some context :) i cant find the bug
<aisrael> Much appreciated. :) I was testing lxd via snap earlier but had to roll back, so that might be related.
<jgriffiths> Has anybody seen the openstack-base bundle hang with four bare-metal (MAAS) servers (mysql, ceph-radosgw, and rabbitmq-server are "waiting for machine")? I know this isn't much information, but I'm guessing this is a common problem with a simple solution
<rick_h_> jgriffiths: try a juju status --format=yaml and see if there's better info in the machine section
<jgriffiths> Thanks. That gives much better information than a 'juju status' it seems. "Creating container: failed to ensure LXD image: image not imported!'" and "agent not communicating with server"
<jgriffiths> It's only the one server. I'm pretty sure I've put those services on a different machine, but I will rebuild the bundle with different server constraints to see if anything changes.
<aisrael> lazyPower, turned out to be a bad image in lxd. I purged 14.04 and re-launched to grab a fresh one.
<lazyPower> aisrael nice. glad it was a localized issue and not something bigger. This explains why I couldn't find a bug :D
<aisrael> lazyPower, yup. Although, if I could repro it, it'd be nice to handle it a little better.
<hallyn> is there a best way to have a juju install charm copy a local directory into the instance?
 * hallyn pings rockstar bc why not :)
<lazyPower> hallyn the only thing that springs to mind is to package up that local directory, and deliver it as a resource.  Another option would be to use a devicemount if you're on lxd. but thats out of band of juju/charms, and more of a manual exercise.
<hallyn> was hoping for an automated 'juju scp' kind of thing
<hallyn> triggered from a charm
<lazyPower> i'm not sure how your charm would inform your laptop to scp that over without running some kind of daemon
<hallyn> i'm not either
<lazyPower> i think cory_fu landed some work on dhx, and it has capabilities of attaching stuff to a charm when you enter debug-hooks
<lazyPower> which iirc, just went 2.0 compat a couple weeks ago
<lazyPower> https://github.com/juju/plugins/blob/master/juju-dhx -- according to the commit history it went 2.0 back in june.   its unofficial so ymmv - known docs for the plugin are here: https://jujucharms.com/docs/1.24/authors-hook-debug-dhx
<hallyn> ok resources should work for me i think - thx
 * hallyn looks at the plugin real quick
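Charm-side, fetching an attached resource looks roughly like this (the resource name 'payload' is hypothetical; resource_get returns a local path, or False if nothing has been attached):

    import subprocess
    from charmhelpers.core import hookenv

    path = hookenv.resource_get('payload')
    if path:
        subprocess.check_call(['tar', '-xzf', path, '-C', '/srv/app'])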
<cory_fu> lazyPower: PR is still pending: https://github.com/juju/plugins/pull/70
<lazyPower> cory_fu landed
<cory_fu> lazyPower: :)
<kwmonroe> cory_fu: petevg: we're still busted on lxd (i think this is what pete saw last week):  report: java.net.UnknownHostException: juju-c82a97-0.localdomain
<kwmonroe> interestingly...
<kwmonroe> ubuntu@juju-c82a97-1:~$ host juju-c82a97-0.localdomain
<kwmonroe> Host juju-c82a97-0.localdomain not found: 3(NXDOMAIN)
<kwmonroe> ubuntu@juju-c82a97-1:~$ host juju-c82a97-0
<kwmonroe> juju-c82a97-0.lxd has address 10.44.139.23
<kwmonroe> anyway cory_fu petevg.. i wonder if we can just tweak get_fqdn to use facter hostname instead of facter fqdn:  https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L479
<kwmonroe> the short names seem to be resolvable in all our clouds.  the alternative would be to make .localdomain resolve throughout lxd containers.
<petevg> kwmonroe: I think that I'm +1 to refactoring get_fqdn; I don't think that, in the places that we use it, we actually need the fqdn. (We should change the name, though.)
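A sketch of the tweak kwmonroe proposes for get_fqdn in apache_bigtop_base.py, assuming facter is on the path as the current code already requires:

    from subprocess import check_output

    def get_fqdn():
        # return the short hostname, which resolves on all the clouds
        # tested, instead of facter's fqdn (which comes back as
        # juju-XXXX-N.localdomain under LXD)
        return check_output(['facter', 'hostname']).decode().strip()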
<kwmonroe> hey frobware, should lxd fqdns be .localdomain or .lxd?
<kwmonroe> frobware: i ask because i can't ping one lxd container from another when i do "ping <other>.localdomain", but i can when i do "ping <other>.lxd".  i'm wondering if it's related to https://bugs.launchpad.net/juju/+bug/1623480
<mup> Bug #1623480: Cannot resolve own hostname in LXD container <lxd> <network> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1623480>
<cory_fu> petevg: Comments made on https://github.com/juju-solutions/matrix/pull/1
<petevg> cory_fu: grazie. (Don't forget that we have that benefits meeting thingy right now.)
<cory_fu> petevg: I'm in it.  :)
<bdx> hey whats up everyone?
<lazyPowe_> yo yo bdx
<bdx> can someone help me explain what my user is experiencing here -> https://s13.postimg.org/ffzttj0if/Screen_Shot_2016_10_12_at_2_29_13_PM.png
<bdx> I wasn't aware there was a localhost/localhost provider ...
<lazyPower> errr
<bdx> lazyPowe_: perfect
<bdx> just the guy I was looking for :-)
<lazyPower> thats got to be a manual cloud
<lazyPower> or something similar custom named
<bdx> lazyPower: I thought the cloud name was distinctly defined and displayed similarly though. Its the controller and that we have the capability of customizing though right .... trying to get more info from him now ..
<lazyPower> cory_fu - did we land the excludes: key in layer.yaml?
<bdx> lazyPower: what have you been deploying the kub stack on?
<lazyPower> bdx: manual provider, gce, aws, azure, openstack
<bdx> ahh ... so his reply "don't know if is using lxd…. yes its bootstraped to localhost"
<bdx> I'm thinking I'll point him at aws for now
<lazyPower> its not compatible enough with LXD to be fully containerized. We've actually just completed an initial round of that work if you're interested in the results: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/70
<lazyPower> the yield was: "Not now, but we'll work towards making this a thing"
<cory_fu> lazyPower: I don't think it's been released yet
<lazyPower> cory_fu: ok, that makes sense then. I was looking through open PR's and the commit history and wasn't able to pin it down
<lazyPower> btw, +1'd 2 of your pr's
<lazyPower> thanks for cleaning up the debug output and fixing layer-names vs path-names
<lazyPower> <3 'preciate you
<hallyn> I'm using kvm local provider - how can I set the default memory limit higher?  i assume environments.yml
<hallyn> but i'm failing to find documentation on the available keys
 * hallyn bets rharper or jamespage knows
#juju 2016-10-13
<lazyPower> hallyn - environments.yaml is a juju 1.x convention
<lazyPower> i'm assuming you're using 1.25.6 after re-reading that statement.... as kvm local provider only exists on the 1.x series of juju atm
<hallyn> heh, then i suppose it's a good thing i'm using that.  i was thinking of trying 2.0 as surely it must be better, but...
<hallyn> ok let's look through the source
<hallyn> though finding the src package can be a challenge.  wow.
<hallyn> ugh, having the source doesn't always help :)
<hallyn> rogpeppe: hey!
<hallyn> oh here we go, maybe in ./src/github.com/juju/juju/container/kvm/kvm.go
<hallyn> *sigh*  https://juju.ubuntu.com/docs/reference-constraints.html redirecting to fluff is helpful
<hallyn> all right i guess i'll stick to manually doing set-constraints on every deploy
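The workaround hallyn settles on, for reference (juju 1.25 syntax; the values are illustrative):

    # environment-wide default for new machines
    juju set-constraints mem=4G
    # or per service
    juju set-constraints --service mysql mem=4G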
<pmatulis> hallyn, bonjour, where did you get that link?
<pmatulis> hallyn, try inserting /1.25/ or /2.0/ after /docs/
<rogpeppe> hallyn: hiya
<herb64> Hi all, does anybody know how to bootstrap juju controller into openstack cloud with self signed certificate?
<herb64> --debug tells me, that certificate has unknown authority. Searching for something similar as with using nova --insecure
<anrah> herb64: i have no problems using self signed certificates
<herb64> I'm using Juju 2.0 beta
<herb64> 2.0-beta15-xenial-amd64 exactly
<anrah> ok, i have rc2
<herb64> Good to hear, that it basically should work and that there's no general problem with it. I'll go for an update. Thank you
<anrah> Hmm, but now after looking my novarc file the keystone url is only http
<herb64> Well, but updating might be a good idea, anyway
<kjackal_> Good morning juju world!
<magicaltrout> god help us
<herb64> I now upgraded to juju 2.0 rc3 - but still the same, when bootstrapping into openstack
<herb64> auth fails, because "x509: certificate signed by unknown authority"
<herb64> any ideas, how to bootstrap into openstack with self signed certs, some flag similar to --insecure with nova?
<magicaltrout> herb64: dunno, kjackal_ appears to be awake though so might know someone who knows
<magicaltrout> or jamespage might be around and might have a clue if you've not spoken with him about it
<kjackal_> Hi magicaltrout herb64
<magicaltrout> i've not used openstack, but i'd imagine most certs are self signed aren't they?
<magicaltrout> considering how many people use it to test rather than in production
<magicaltrout> awww
<magicaltrout> as if
<magicaltrout> oh well
<magicaltrout> in other news.... it turns out my goldfish likes to be stroked......
<autonomouse> Hi, I don't know much about the non-reactive charms (or that much about reactive ones either, but I'm getting there) but I have a problem with getting a reactive charm to talk to one. The relation doesn't seem to be triggering anything on the non-reactive side. When I look in the hooks folder, I can see lots of symlinks for the relations with other charms, with names such as "xxx-relation-joined" or "yyy-relation-changed".
<autonomouse> The one I'm trying to trigger is @hooks.hook('oildashboard-relation-joined') in hooks.py. Looking at the symlink files, they all just seem to be symlinks to the hooks.py - could this be the cause? Do I just make a new symlink called oildashboard-relation-joined? Seems a bit random...?
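For reference, a sketch of the classic (non-reactive) dispatch pattern autonomouse is describing: juju executes hooks/<hook-name>, each hook name is a symlink to hooks.py, and charmhelpers dispatches on argv -- so yes, a new hook needs its own symlink:

    # hooks/hooks.py
    import sys
    from charmhelpers.core.hookenv import Hooks

    hooks = Hooks()

    @hooks.hook('oildashboard-relation-joined')
    def oildashboard_joined():
        pass  # handle the relation here

    if __name__ == '__main__':
        hooks.execute(sys.argv)

    # and in the charm:  ln -s hooks.py hooks/oildashboard-relation-joined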
<hallyn> pmatulis: https://juju.ubuntu.com/docs/1.25/reference-constraints.html doesn't work either :(   link came from a blogpos iirc
<hallyn> but blaming the blog post would be wrong.  this is the internet.
<hallyn> rogpeppe: i was looking for someone who could point me to docs about the keys available in environments.yaml
<rogpeppe> hallyn: there is none. https://bugs.launchpad.net/juju/+bug/1628865
<mup> Bug #1628865: bootstrap command help does not document possible configuration values <juju:Triaged> <https://launchpad.net/bugs/1628865>
<rogpeppe> hallyn: unfortunately the source code seems to be the only reliable place and the relevant code is now all over the place since the key space has been split up
<hallyn> rogpeppe: d'oh
<hallyn> right, key space split up is what i ran into last night looking for the answer in the src :)  ok thx
<rogpeppe> hallyn: good places to look are: controller/config.go environs/bootstrap/config.go environs/config/config.go
<rogpeppe> hallyn: there are probably others that i don't know about
<hallyn> rogpeppe: i was assuming there was some structure, i.e. "container: kvm\nconstraints.mem: 2G"
<rogpeppe> hallyn: what version of juju are you using?
<hallyn> 1.25.6-xenial-amd64
<vmorris> Should I expect a unit that's error/idle with a failed update-status hook to ignore any attempts to remove it?
<vmorris> even with a --force switch, unit-remove seems to have zero effect on the hung unit
<icey> the 16.10 release announce mentions juju 2.0 GA, is that going out today? http://insights.ubuntu.com/2016/10/13/canonical-releases-ubuntu-16-10/
<lazyPower> vmorris - which substrate is this?
<vmorris> lazyPower lxd/local
<lazyPower> vmorris juju remove-machine # --force (assuming only one application/charm is deployed there) should remove that stuck unit
<lazyPower> thats kind of a big hammer approach to removing a stuck unit, but it does work.
<vmorris> yeah, i suppose that would be fine for lxd
<lazyPower> icey we can hope :D
<rick_h_> icey: yes
<marcoceppi> lazyPower: did you see the reply here? http://askubuntu.com/questions/835522/kubectl-cluster-info-get-502-bad-gateway-error
<lazyPower> marcoceppi did just now :( and it makes me sad
<lazyPower> default  lxd-test    localhost/localhost  2.0-rc3  -- we dont support lxd deployments yet
<marcoceppi> lazyPower: yeah, is there not a profile we can add to LXD to at least get it around
 * lazyPower will update the question and edit it to be appropriate
<lazyPower> nope, lxd constraints will keep flannel from working so it'll never fully turn up
<lazyPower> you'd have to run the entire thing as privileged containers and i'm not certain that works
<lazyPower> Cynerva - have we tried that? stuffing k8s in priv. lxd containers?
<Cynerva> lazyPower: tried master, haven't tried worker. With master in privileged LXD I got pretty far, but nginx-ingress-controller seemed to have trouble deploying. That may have been a sporadic issue though. I didn't look into it further.
<lazyPower> thats odd, its just an nginx container + ssl certs
<lazyPower> but ok, we've taken a prelim look at it.
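For anyone following along, "privileged containers" in LXD 2.x terms would be something like the following -- not something the k8s charms support, per the discussion above:

    # per profile
    lxc profile set default security.privileged true
    # or per container
    lxc config set <container> security.privileged true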
<lazyPower> marcoceppi i'm not sure what i can add here to the conversation. I could reasonably rewrite the question/answer to be more specific to the problem and go into technical detail about the different components and what we think needs to happen
<lazyPower> is that overkill?
<ahasenack> marcoceppi: hi, around? I got a very surprising update-status hook failure in the ubuntu charm
<ahasenack> it was running ok every 5min, and now it's failing in every run with ImportError: No module named 'charmhelpers'
<ahasenack> there was no code or charm upgrade triggered by me
<ahasenack> I filed #1633106
<mup> Bug #1633106: update-status hook failure: cannot import charmhelpers <kanban-cross-team> <landscape> <ubuntu (Juju Charms Collection):New> <https://launchpad.net/bugs/1633106>
<ahasenack> marcoceppi: n/m, forgot that this machine was mid release-upgrade :(
<arosales> rick_h_:  we continue to hit the lxd issues with rabbitmq
<arosales> rick_h_: specifically https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
<mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
<mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
<arosales> rick_h_: I think vmorris is seeing this with lxd/local with openstack charms on s390x
<arosales> rick_h_: suggestion how we should proceed? In 1584902 we were unable to work around the issue in charms, and thought it may need to be resolved at the lxd or juju level, thus wanted to get your thoughts
<vmorris> yep -- adding calls to configure_nodename() in the update-status and amqp-relation-changed hooks seems to help
<rick_h_> arosales: looking
<rick_h_> dooferlad: ping, we should be in a place where all lxd containers have hostnames now as of rc3 right?
<dooferlad> I believe so
<rick_h_> dooferlad: created a card for the rabbitmq hostname issue there if you can please look into that as a next line of owrk
<rick_h_> dooferlad: looks like the hostname turns into ubuntu or something there
<arosales> vmorris: I think kwmonroe was looking at using the hostname in the big data charms to work around a similar issue
<arosales> *think*
<vmorris> rick_h_ dooferlad : the hostname is set to ubuntu at initial deploy, then is changed to juju-#####; however, the rmq-server configuration in /etc never gets updated
<hallyn> So - juju 2.0 does not have the local/kvm provider, what will be the proposed alternative?
<rick_h_> hallyn: lxd provider is the only alternative. Manual provider if you want to create kvm machines and add-machine them to a juju model
<kwmonroe> yeah rick_h_ arosales vmorris dooferlad, the hostnames are legit after the initial deployment (i think because the containers are rebooted).  unfortunately, that doesn't help containers talk to each other.
<kwmonroe> http://paste.ubuntu.com/23318452/
<kwmonroe> seems like the lxd containers should have '.lxd' as their domainname instead of '.localdomain'.  dooferlad, does that sound right to you?
<dooferlad> kwmonroe: I don't know the specifics of if .localdomain, .lxd or something else is right.
<dooferlad> rick_h_, vmorris, kwmonroe: the hostname is set by Cloud Init - Juju just asks it to write the files. I didn't see hostname = ubuntu during my testing.
<dooferlad> none of this provides DNS though.
<rick_h_> kwmonroe: this is on the manual provider?
<vmorris> dooferlad et.al https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271/comments/5
<mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
<rick_h_> kwmonroe: vmorris that is ? ^
<rick_h_> e.g. that cloud-init might not be in play here?
<dooferlad> rick_h_: cloud-init only runs at the first boot.
<dooferlad> rick_h_: so no, that is nothing to do with that.
<rick_h_> dooferlad: right, my point is to see if an original hostname is set when the machine is created, and when it's added to juju with add-machine does it change/not change in some way that's causing an issue?
<dooferlad> rick_h_: OK, that won't have anything to do with cloud-init. I can't get to that today, but I will look at it first thing tomorrow.
<rick_h_> dooferlad: k
<kwmonroe> rick_h_: in my case, it's fine that 'ubuntu' is used when the machine is created so long as it has a real FQDN when the application is installed.  for me, it's a problem that 2 containers can't resolve the FQDNs in the same deployment.. so i opened https://bugs.launchpad.net/juju/+bug/1633126.
<mup> Bug #1633126: can't resolve lxd containers by fqdn <juju:New> <https://launchpad.net/bugs/1633126>
<dooferlad> rick_h_: (because Juju doesn't run until cloud-init has finished)
<arosales> rick_h_: I believe kwmonroe is using local/lxd on 2.0 and big data charms, not rabbit, but similar issues
<rick_h_> arosales: k, so the s390 will be different than local/lxd so trying to narrow down where we're looking atm.
<arosales> rick_h_: gotcha and in bigdata charm we see the issue everywhere, not just s390x, but also on x and p, aiui
<dooferlad> kwmonroe, arosales: do those LXDs have a DNS server that will resolve the Juju names? If not, how do we expect it to work?
<arosales> rick_h_: for s390x we are seeing https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1563271 and https://bugs.launchpad.net/juju/+bug/1632030
<mup> Bug #1563271: update-status hook errors when unable to connect <landscape> <openstack> <rabbitmq-server (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1563271>
<mup> Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error <juju> <juju-db> <mongodb> <s390x> <juju:Incomplete> <Ubuntu on IBM z Systems:Incomplete> <https://launchpad.net/bugs/1632030>
<rick_h_> arosales: right, and the second one we've got eyes/work going into
<kwmonroe> sure dooferlad.. those LXDs use the lxdbr0 as their nameserver... and they can all resolve each other as "juju-foo-X.lxd".  just not "juju-foo-X.localdomain"
<rick_h_> arosales: so I'd like to take one issue at a time and ask dooferlad to look into hostname issues specifically wherever we're seeing those
<arosales> rick_h_: thanks, and the rabbit issue sounds related to the one the openstack folks opened, https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902
<mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
<arosales> rick_h_: thanks and very reasonable approach.
<arosales> :-)
<dooferlad> kwmonroe: thanks - that certainly points to a fix!
<arosales> thanks to vmorris for testing and providing feedback
<vmorris> thanks arosales & all for the attention
<arosales> rick_h_: admcleod was also going to be setting up openstack on s390x, manual on lpar, and with lxd (I think) and see if he can reproduce https://bugs.launchpad.net/juju/+bug/1632030
<mup> Bug #1632030: juju-db fails to start -- WiredTiger reports Input/output error <juju> <juju-db> <mongodb> <s390x> <juju:Incomplete> <Ubuntu on IBM z Systems:Incomplete> <https://launchpad.net/bugs/1632030>
<arosales> vmorris: I think LXD + Ubuntu on LPAR is a really solid use case, so would like to make sure it is working smoothly
<kwmonroe> dooferlad: fwiw, this feels eerily familiar to https://bugs.launchpad.net/juju/+bug/1623480, but that bug was about a single container not being able to address itself.  it's like just a baby step farther to make sure multiple containers can address each other (by ensuring domainname = '.lxd').. i think :)
<mup> Bug #1623480: Cannot resolve own hostname in LXD container <lxd> <network> <juju:Fix Released by dooferlad> <https://launchpad.net/bugs/1623480>
<dooferlad> kwmonroe: agreed.
<hallyn> rick_h_: thanks.  too bad.
<vmorris> arosales: agreed, the platform is great for packing in containers
<beisner> hi arosales, rick_h_ - heads up.  ceilometer and aodh are also about to become more dns-sensitive, not due to changes in the charms, but changes in upstream code where they really really want sane A/PTR resolution all the way around.  this will be a growing theme, not specific to openstack /methinks.
<lazyPower> beisner yep, K8s has gone that way as well on our side of the infra wall. Nice to see we're converging around some of the same ideas in the deployment.
<rick_h_> beisner: right, we have a long term plan that we've tested works well that'll be coming soon
<beisner> fresh bug for reference:  https://bugs.launchpad.net/juju-core/+bug/1632909   slightly different set of tools and providers than the other bugs though, so new bug.
<mup> Bug #1632909: ceilometer-api fails to start on xenial-newton (maas + lxd) <maas-provider> <uosci> <OpenStack AODH Charm:New> <juju-core:New> <ceilometer (Juju Charms Collection):New> <https://launchpad.net/bugs/1632909>
<beisner> rick_h_ lazyPower arosales - ultimately if this new canary bundle fails, then all sorts of workloads should be expected to fail.  i'd go as far as to suggest TDD on juju-core, gated on this passing on providers:  http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/other/magpie-lxc.yaml
<beisner> if that^ passes, so will rabbitmq, ceilometer, nova-cloud-controller and others who need to know names and numbers :-)
<rick_h_> beisner: that'd be good to send to tbauman and try to get into the stack of things they test there
<rick_h_> beisner: once we know it works, guessing it doesn't at all currently?
<beisner> rick_h_, yep we talked about it at the sprint and sinzui is planning something i believe.
<rick_h_> ok
<beisner> rick_h_, for ex., that is known to fail on the openstack provider and manual provider, since lxc units go to an island behind nat.  if you need a measure of fixing that, then this bundle is your baby.
<arosales> beisner: it feels like the problem is larger than rabbit, thanks for testing your bundle to confirm that with data.
<beisner> arosales, lazyPower - yep, glad we're either all crazy or right :-)
<lazyPower> beisner the one difference is k8s is shipping with its own dns provider to do the mapping. its not relying on env specific dns
<lazyPower> its kind of boggling how it all works, theres a lot of moving componentry there that can and might break.
<kwmonroe> can i specify a unit in the "to: X" bundle placement directive?
<arosales> Forgot to say happy Yakkety release day
<arosales> and OpenStack 16.10 charm release day
<kwmonroe> reading this:  https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives, it seems like i should be able to say "to: ubuntu/0" if an ubuntu was defined earlier.. but that doesn't work:
<kwmonroe> E: placement ubuntu/0 refers to non-existent service ubuntu/0
<kwmonroe> perhaps that documentation is *only* for lxd placement on a unit?
<beisner> woohoo yes arosales !
<rick_h_> kwmonroe: you have to target machiens not applications
<rick_h_> kwmonroe: we don't map the application/unit to the machine, it's to a new machine, new container on an existing machine, etc
<beisner> kwmonroe, it'd be --to "ubuntu=0" in juju-deployer speak.
<rick_h_> kwmonroe: I seriously think that doc section is a lie :/
<kwmonroe> beisner: say what?  what is this juju-deployer --to stuff you speak of?
<beisner> ha!  anyway, this has worked for us for all of time:  http://pastebin.ubuntu.com/23319065/
<kwmonroe> my stars beisner!  i'm fixin to send you an ecard if this works.
<rick_h_> beisner: ok, but only with the deployer though? or does that work with juju itself?
<beisner> rick_h_, i believe so.  juju dash deployer ! juju space deploy ;-)
<rick_h_> beisner: looking at the code it checks the directive is a machine
<kwmonroe> i think not beisner.. invalid placement syntax "ganglia=0"
<rick_h_> yea
<kwmonroe> using "juju deploy", not "juju-deployer"
<beisner> right.  so, rick_h_ as we continue to ramp up 2.0 in osci, you'll likely see a load of feature parity wishlist items from us.  such as this.
<rick_h_> beisner: I look forward to getting the list of requests
<kwmonroe> so rick_h_, riddle me this.  my bundle defines 1 machine.  in juju1, that needs to be 'machine: 1' because machine 0 is taken by the bootstrap node.  in juju2, juju will create the machine as machine-0, so subsequent placement fails (there is no machine-1 in juju2).
<beisner> oh i've been trying to solve for that equation too.  /me stands by
<rick_h_> kwmonroe: so bundles all start at 0 and include only machines defined in the bundle
<kwmonroe> i'll whip up a proper bug report, but i think that's what's happening
<rick_h_> kwmonroe: so you're saying that bundle fails on the deployer side? I thought it was updated to accept that in the v4 format
<kwmonroe> rick_h_: if i have a bundle define machine 0 and deploy it on juju1, stuff gets colocated on my bootstrap node
<rick_h_> kwmonroe: so the machine number cannot and does not relate to a machine in your model, it's all new machines
<rick_h_> kwmonroe: not with the gui, I'd have to test deployer then
<beisner> rick_h_, that's exactly why addressing them by name is valuable ^
<kwmonroe> yeah rick_h_ -- probably a j-deployer thing.. like i said, i'll write it up more betterer.
<kwmonroe> +1 beisner
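A minimal juju 2.0 bundle sketch of what rick_h_ describes: IDs under 'machines:' are bundle-local and 0-based, and 'to:' must reference those IDs (or containers on them), never units or existing model machines (charm names illustrative):

    series: xenial
    machines:
      '0': {}
    services:
      ubuntu:
        charm: cs:ubuntu
        num_units: 1
        to: ['0']
      ganglia:
        charm: cs:ganglia
        num_units: 1
        to: ['lxd:0']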
<rick_h_> beisner: understand, but async and order and ... so name is hard
<rick_h_> beisner: but yea, it's cool to get it as a request and look at it
<rick_h_> beisner: just saying that it's been this way for xxxxx months and to get it the day of release is a can of "sorry"
 * rick_h_ is having too much fun today, take all that with a giant :)
<beisner> ha! :-) no i'm not asking for that.
<beisner> native deploy ftw;  i've just got both paths to regression test for the lifetime of Trusty, so it's tricky to craft bundles in a way that we don't have to maintain two sets of bundles.
<kwmonroe> wait, we're not doing double digit RCs?
<rick_h_> beisner: right, but where's the bug on this as part of native deploy?
<beisner> back to arosales - happy Yakkety day!
<cory_fu> lazyPower, kjackal_, petevg, kwmonroe: Can I get a re-review on https://github.com/juju-solutions/layer-apache-kafka/pull/13
<petevg> cory_fu: +1 (I agree that it's okay to fail on a weirdly composed tarball.)
<bdx> rene4jazz: glad you made it!
<bdx> lazyPower: I want to introduce you to a colleague of mine, Rene Lopez (rene4jazz)
<lazyPower> Greetings rene4jazz o/
<bdx> lazyPower: Rene is interested in deploying your kub bundle, I thought I would put you guys in touch
<beisner> rick_h_, ha, had to dig deep as the one i had been tracking is invalid.  https://bugs.launchpad.net/juju-core/+bug/1583443
<mup> Bug #1583443: juju 2.0 doesn't support bundles with 'to: ["service=0"]' placement syntax <juju-core:Invalid> <https://launchpad.net/bugs/1583443>
<rick_h_> beisner: ah yea not on our radar looking at invalid bugs
<rene4jazz> Greetings lazyPower
<lazyPower> rene4jazz I'm happy to help get you started with Kubes. Do you have any initial questions?
<bdx> rene4jazz: lazyPower is the maintainer of the kub bundle, I wanted to introduce you two, so hopefully you can bounce ideas off eachother as you run through kub deploys
<lazyPower> co-maintainer*
<rene4jazz> thanks bdx
<bdx> mybad *^
<bdx> np
<lazyPower> mbruzek does a lot of the heavy lifting too :)  pedantic i know, but he deserves some credit too
<bdx> def
<bdx> mbruzek: props!
<lazyPower> he's out to lunch and getting props, haha
<lazyPower> he's gonna be bummed he's missing it
<lazyPower> i'll relay though. So back to kubernetes
<lazyPower> rene4jazz - tell me a little bit about your wants/needs here. I have it on good authority you were pioneering on LXD - which is unfortunately not supported at this time for most of the charms.
<rene4jazz> lazyPower, I'm curious about the kub bundle and started to mess with it. My first approach was to try with the localhost (lxc based) provider
<lazyPower> yeah, we really need to move that warning up in the README... its buried at the bottom under the caveats
<rene4jazz> My goal is first deploy the bundle then start adding pods for several app related services
<lazyPower> Ok. Is localhost your primary option for exploration? The cost of using clouds being the prohibitive factor here....
<rene4jazz> lazyPower, correct... cost is a factor
<lazyPower> rene4jazz - so this limits options, but we can work with this. Are you familiar with KVM?
<rene4jazz> lazyPower, the Hypervisor?
<lazyPower> correct. What i'm going to propose is using the manual provider to enlist a few KVM vm's, and test there. We can trim the bundle down to only the K8s charms which will save you some effort in how many VM's to provision.
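A sketch of the manual-provider route lazyPower suggests, assuming the KVM guests are up and reachable over ssh as the ubuntu user (using juju 2.x's manual/<host> shorthand; otherwise a manual-type cloud can be defined in clouds.yaml):

    juju bootstrap manual/ubuntu@<vm0-ip> k8s-test
    juju add-machine ssh:ubuntu@<vm1-ip>
    juju add-machine ssh:ubuntu@<vm2-ip>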
<bdx> lazyPower: what about developer.juju.solutions .. is that not a thing anymore?
<lazyPower> bdx - oh yeah! i forgot all about it
<lazyPower> bdx nice save
<rene4jazz> lazyPower, bundle size is fine, I can deal with the VM numbers
<kwmonroe> rick_h_: i'm game to make a juju doc PR.  to be clear, the placement directive section should talk about machine placement only, and those should be defined in a 0-based "machines:" section.  right?
<kwmonroe> rick_h_: and to be doubly clear, i'm talking about updating this page: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
<lazyPower> rene4jazz - alternatively, if you're up for it we can get you on the charm developer program which will give you some AWS runtime while you explore the bundle in its entirety, on aws.
<rick_h_> kwmonroe: yes
<kwmonroe> got it
<lazyPower> rene4jazz if you're interested in the charm developer program - head over to http://developer.juju.solutions and sign up for the CDP
<rene4jazz> lazyPower: great to know, right now will keep exploring local options, thanks for the help
<lazyPower> rene4jazz - OK. I'm here to lend a hand if you have any questions. feel free to ping
<kwmonroe> ah crap.. #thatfeelingwhen you mistype 'branch -d' instead of 'checkout -b' :/
<lazyPower> hope you pushed it remotely so you can re-fetch
<kwmonroe> don't be silly lazyPower.  i'm too agile for all this "pushing" and "remote" nonsense
<kwmonroe> and to top it off, my time machine backup disk died 2 days ago, and of course i turned off mobilebackups because #dumb.
<lazyPower> Whoops
<kwmonroe> should have gotten a thinkpad
<lazyPower> my thinkpad has hw failure scrolling in dmesg :(
<kwmonroe> lol.. maybe a dell then.
<lazyPower> well i also purchased it like fifth or sixth hand
<lazyPower> assuming anyway
<vmorris> does anyone have experience using nginx or haproxy to reverse proxy access to juju applications?
<vmorris> link to a tutorial or some guidance would be appreciated!
<x58> Is james page in this channel?
<vmorris> x58: yeah
<x58> What's his nick?
<vmorris> jamespage
<x58> Nevermind, found it :P
<vmorris> ><
<x58> Just didn't tab enough. Too many james's
<jamespage> hello
<jamespage> vmorris, there is a haproxy charm that does that
<vmorris> jamespage thanks, i'm looking into it now
<jamespage> basically any juju app that provides an http interface can be load balanced
<jamespage> vmorris, context?
<x58> jamespage: https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1584902 this being reverted is worrying to me...
<mup> Bug #1584902: Setting RabbitMQ NODENAME to non-FQDN breaks on MaaS 2.0 <backport-potential> <canonical-bootstack> <conjure> <cpec> <juju2> <maas2> <sts> <rabbitmq-server (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1584902>
<jamespage> x58, well it broke a lot more things than it fixed - what's your specific concern?
<x58> jamespage: It will break our deployment on 2.0, where it will use the wrong NODENAME and won't cluster/do anything.
<x58> The fix that went in was specifically to support deployments on 2.0, I understand that it apparently breaks 1.25.x but reverting it doesn't seem like the right thing to do either.
<jamespage> x58, no it still needs fixing
<jamespage> x58, tbh a lot of this relates to the fact that we need a consistent, resolvable hostname-based environment from juju
<x58> We have had 40+ deployments in our lab with 2.0 with the charm as it stands now, and everything clusters and works without issues.
<x58> Yeah...
<jamespage> x58, a charm should never have to mess with /etc/hosts
<jamespage> and that was the fix we tried
<jamespage> basically unless every rabbit unit had the same view as every other one, units failed to cluster
<x58> As I mentioned, currently it works for us. It may not be ideal, but it works. Reverting it will break it.
<x58> A better solution should be found...
<jamespage> x58, fix your charm version for the time being
<jamespage> x58, we will have a better fix for 2.0 users
<jamespage> if a particular version works now, continue to use that and don't upgrade
<x58> Ok. Will do.
<jamespage> x58, actually wait - which charm version are you using? I reverted this in the stable charm as well as the dev branch
<x58> Let me check my bundle file.
<x58> prod: cs:~openstack-charmers-next/xenial/rabbitmq-server lab: rabbitmq-server
<x58> I just noticed the comment on the bug, we haven't had to redeploy lab yet, but I have a feeling that as soon as we grab the reverted version, we are going to hit the original bug.
<kwmonroe> rick_h_ beisner: if you'd like to double check the language, i merged the "machine specifications" and "bundle placement" sections into 1 as per our earlier discussion:  https://github.com/juju/docs/pull/1448
<kwmonroe> (that is, apps can specify machine placement, not service/X placement)
<x58> jamespage: ^^
<jamespage> x58, yeah if you deploy clustered I think you will
<x58> What was the previous version so we can pin that?
<vmorris> this is a dumb question, but how do i get the credentials for the percona db post-deploy?
<beisner> vmorris, i believe you have to set it via the charm config options
<vmorris> deployed via the openstack-base bundle, it wasn't set
<vmorris> looking at the juju config output...
<vmorris>   root-password:
<vmorris>     default: true
<vmorris> ?
<jamespage> x58, I think its https://jujucharms.com/rabbitmq-server/xenial/5
<jamespage> rabbitmq-server-5
<jamespage> vmorris, ok so there is a gotcha here - if you want to deploy multiple pxc units, you must provide the root password via configuration
<jamespage> vmorris, I need to spend some time on pxc overhauling the bootstrap process and password management stuff to use leader election
<jamespage> it pre-dates juju providing anything helpful for generating passwords for clustered services from within a unit
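For reference, a hedged example of providing that password up front through the root-password option shown above (value hypothetical; older 2.0 clients spell this juju set-config):

    juju config percona-cluster root-password='s3cr3tpassw0rd'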
<beisner> hi kwmonroe, rick_h_ i'd prefer to defer to someone on juju-core who is code-familiar with native deployer re: docs.
<jamespage> x58, just confirming that now - have a MAAS 2.0 /Juju 2.0 env I'm doing some other testing in
<jamespage> it's nippy so spinning up a few more containers is OK
<jamespage> x58, yup that version lgtm
<jamespage> http://paste.ubuntu.com/23320046/
<x58> Excellent. Will pin it.
<x58> Thanks jamespage!
<x58> Looking forward to seeing the issue fixed properly.
<x58> Is there something logging the channel to HTTP? Something like botbot.me would be awesome.
<x58> https://botbot.me/request/
<hallyn> so if i have a workload running in, say environment gce, can i juju switch amazon, start a different workload, and switch back and forth?  /me is afraid to try and ruin the current install :)
<freyes> x58, you can find the logs at https://irclogs.ubuntu.com/2016/10/13/%23juju.html
<x58> Excellent.
<x58> Would be nice to drop a link to that in the topic!
<x58> Doesn't have latest logs :-(
<x58> I spoke too soon ;-)
<vmorris> jamespage: thanks for that
<kwmonroe> hallyn: sure, you can switch back and forth between controllers/models.  the only thing i would be wary of is doing something like "juju bootstrap foo" in a tmux session and then "juju bootstrap bar" in another.
<kwmonroe> hallyn: that said, once the controller has received your request and you're returned to a command prompt, you can switch to whatever your heart desires.
<hallyn> kwmonroe: and switch back and forth?
<kwmonroe> sure hallyn.. juju doesn't forget when you switch to something else :)
<rick_h_> hallyn: kwmonroe and most commands take a -m for a model or controller:model combo
<rick_h_> so you can status w/o a switch and such
<kwmonroe> hallyn: 'juju controllers' and 'juju models' are a lifesaver for me to remember where my stuff is deployed
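A short sketch of that workflow (controller and model names hypothetical):

    juju controllers                # list known controllers
    juju models                     # list models on the current controller
    juju switch aws:default         # jump to a controller:model
    juju status -m gce:prod         # or query another model without switching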
<hallyn> kwmonroe: awesome!  thx :)
<kwmonroe> hallyn: let me know when you try it so i can sign off
<kwmonroe> (totally kiddin)
<hallyn> :)
<hallyn> drat, i guess models are purely a 2.0 thing
<rick_h_> hallyn: yes, whole new world
<kwmonroe> hallyn: it's true models are new in 2.0, but you can still switch in juju 1.25.  i go from aws to azure all the time.
<kwmonroe> hallyn: again, the only thing i would be concerned about is if you did an operation (like bootstrapping one env) and switched before it was completed.
<kwmonroe> but you should really just go to juju2. like rick_h_ said, it's a whole new (better) world :)
<kwmonroe> go to yakkety while you're at it ;)
<pmatulis> how do i influence the contents of /etc/neutron/plugins/ml2/ml2_conf.ini ? i'm using a bundle to set up openstack
#juju 2016-10-14
<hallyn> kwmonroe: nope!  i need the local kvm provider
<hallyn> and yakkety would make it more likely that upstart would break things, so no to that too :)
<pmatulis> hallyn, s/juju.ubuntu.com/jujucharms.com/  :P
<pmatulis> https://jujucharms.com/docs/1.25/reference-constraints
<hallyn> pmatulis: wish i could remember which blog i found that in, but i was looking through a lot of them last night...
<hallyn> pmatulis: thanks
<hallyn> (i had seen that page, doesn't have the info i need, but glad it answers that mystery :)
<pmatulis> ;)
<pmatulis> hallyn, kindly open a bug if the documentation is lacking - https://github.com/juju/docs/issues/new
<hallyn> lolz - i see, i can't use local kvm provider anyway without systemd.  drat.
<hallyn> curses!
<lazyPower> hallyn what are you working on that you need juju 1.25 and upstart bidniss?
<hallyn> 1.25 for local kvm provider
<hallyn> upstart bc i want upstart
<lazyPower> ah ok.
<lazyPower> I didn't know if you were kicking tires or were building the next cloud for your mobile DC :)
<hallyn> nah in the cloud i leave systemd installed.  upstart is just for battery life on my laptop
 * hallyn wonders how much work it would be to implement a juju-2.0 version of libvirt-kvm (uvtool) provider
 * hallyn wonders whether he could finagle rbasak into writing it
<hallyn> you know, to avoid uvtool becoming OBSOLETE.  yeah, i'll goad him into it.  bound to work
<blahdeblah> hallyn: As in, something that would allow us to just point juju at a libvirt host and juju deploy VMs?  Because I totally want that.
<hallyn> yeah - which is basically how the local-kvm provider in 1.x works
<blahdeblah> There's a local KVM provider in 1.x?
<hallyn> except i suspect it's more complicated than it needs to be
<hallyn> yup
<blahdeblah> How did I not know about this?
<blahdeblah> Where are my angry eyes?
<hallyn> you use local provider, and change type to kvm instead of container
<hallyn> one sec,
<hallyn> https://jujucharms.com/docs/1.25/config-KVM
<hallyn> no need for ppa though
<hallyn> just install juju-local and add that environments.yaml section
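For reference, the environments.yaml stanza in question looks roughly like this (environment name hypothetical; see the 1.25 docs linked above for the full set of options):

    environments:
      local-kvm:
        type: local
        container: kvm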
<hallyn> been beating on it for the last two days, pretty stable
<blahdeblah> hallyn: I think I love you.
<hallyn> <blush>
<hallyn> you'll love rbasak more when he writes a new uvtool based one for juju 2.0
<blahdeblah> hallyn: I most certainly owe you a six-pack of $PREFERRED_BEVERAGE
<hallyn> well gawsh, all i did was find it, not write it, but i'm not turning down $PREFERRED_BEVERAGE :)
<blahdeblah> Now all I have to do is work out how to get on a sprint you're on.  You might be waiting for that sixer a while... ;-)
<hallyn> :)
<blahdeblah> And I most certainly will owe rbasak double if he does a juju 2 version. :-)
<hallyn> +1
<kjackal_> Good morning juju world!
<kjackal_> SaMnCo: I am almost there
<Spaulding> gday!
<rbasak> hallyn: AFAIK, it's a design decision for there not to be local KVM support in Juju 2. I'm told Juju still supports KVM via uvtool, just not in the local provider.
<SDBStefano> hi, I'm new to juju; I'm following the doc, and I tried to create a test charm using
<SDBStefano> 'charm create -t python mycharm'
<SDBStefano> but the lib dir was not created
<SDBStefano> any suggestions ?
<kjackal_> Hi SDBStefano, haven't tried this template before. Let me give it a try now
<herb64> Hi all, was here yesterday already... juju bootstrap on openstack with a self-signed certificate - cannot bootstrap, getting "signed by unknown authority" error with --debug
<herb64> searching for some option --insecure like in nova
<herb64> running rc3 version now
<kjackal_> herb64: have you asked on #openstack-charms ?
<herb64> hi, kjackal, no.. would this be the better place to ask?
<kjackal_> herb64: I believe so, since your question is rather specific
<kjackal_> SDBStefano: yeap it does not create a lib dir
<kjackal_> SDBStefano: Why did you expect such a directory? Did you read this somewhere?
<kjackal_> SDBStefano: this bug should be related: https://bugs.launchpad.net/charm-tools/+bug/1395560
<mup> Bug #1395560: "create -t python " does not install lib/charmhelpers <Juju Charm Tools:Triaged> <https://launchpad.net/bugs/1395560>
<SDBStefano> hi kjackal_, first thanks a lot for helping, yes here : http://pythonhosted.org/charmhelpers/getting-started.html
<SDBStefano> I'm using: 'charm version' reports charm 2.2.0-0ubuntu1~ubuntu16.04.1~ppa2, charm-tools 2.1.4
<SDBStefano> so it seems the problem isn't solved yet, am I wrong?
<kjackal_> yeap, you are right SDBStefano
<SDBStefano> is there any other way to get the python library (charm-helpers) ? as a tar or something else
<kjackal_> SDBStefano: what exactly do you need to do? There is the option to add it as dependency like we do here: https://github.com/juju-solutions/layer-basic/blob/master/wheelhouse.txt
<kjackal_> SDBStefano: if you build your charm using layers you can just use the basic layer and you should be done
<SDBStefano> so, are you saying that  I don't need the charm-helpers ?
<kjackal_> if not, you should be able to add a wheelhouse.txt in your charm and the charm build process should bring the dependency in
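That wheelhouse.txt is an ordinary pip-style requirements file; a minimal hedged example (version pin hypothetical):

    # wheelhouse.txt -- dependencies fetched at `charm build` time
    charmhelpers>=0.9.0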
<kjackal_> SDBStefano: by using the basic layer you are importing the charmhelpers
<SDBStefano> so it seems that I have to proceed and see how to use  charm layers
<SDBStefano> ok, thanks, I'm going to try
<kjackal_> SDBStefano: layers is the recommended way. But if you do not want to use layers have a look at what the basic layer is doing
<kjackal_> SDBStefano: for example https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L36
<SDBStefano> I'm going to follow your suggestion and looking at how to use the layers
<kjackal_> SDBStefano: yeap that is the right way to go, you will get a lot of functionality for free
<SDBStefano> thanks a lot. out of curiosity I'm in Italy, where are you ?
<kjackal_> Athens Greece
<kjackal_> SDBStefano: We are almost on the same timezone, ping me again if you get into trouble
<SDBStefano> kjackal - I was looking at https://jujucharms.com/docs/devel/authors-charm-building that talks about layers
<SDBStefano> but it describes bash scripts
<SDBStefano> but if I understood correctly, the right direction is having it all in python, like the github repo you advised
<SDBStefano> so is this doc https://jujucharms.com/docs/devel/developer-getting-started fine ?
<SDBStefano> or should I follow only the info within the github ?
<hallyn> rbasak: drat, if it's a design decision then i guess you can't get away with coding it (meaning i may have to).
<hallyn> rbasak: the claimed support is rubbish.  having to manually create a vm, register it with juju, then tell juju to send a workload to it means i may as well use ansible
<rbasak> hallyn: does KVM support work with the manual provider? If so, then you could probably hack something together for a single machine KVM use case.
<rbasak> eg. by putting either the client or the "machine" in a container.
<hallyn> right.  i was just trying to badger you into doing it :)
<rbasak> Then as far as Juju is concerned your manual provider machine is somewhere else, and you can create KVMs on it?
<hallyn> but really, using local kvm provider made my charm iteration way faster and cheaper, so this still isn't seeming like a step up for me.  but, i'll figure something out (or stick with 1.x)   - thx rbasak
<rts-sander> my upgrade-charm hook froze so I killed it manually via the PID
<rts-sander> now the state is in upgrading still
<rts-sander> how do I reset the state of the unit?
<kjackal_> SDBStefano: here is an example of layers in python https://jujucharms.com/docs/2.0/developer-layer-example
<kjackal_> SDBStefano: this give a good description of what layers are: https://jujucharms.com/docs/2.0/developer-layers
<rts-sander> looks like the hook has a timeout, it's now in an error state and I can recover it
<bildz> http://i.imgur.com/yonlG9F.png  I feel I'm almost there.  Can someone take a look and let me know what may be able to get me past this issue?
<bildz> http://pastebin.com/qBtDJsyd
<bildz> i have 6 machines, do I need 7?
<gQuigs> if I'm using the juju stable ppa on trusty.. how do I switch back to "juju 1"?
<gQuigs> the instructions on first launch don't work
<lazyPower> gQuigs i'm looking into this for you, 1 moment while i spin up a vanilla trusty image
<lazyPower> gQuigs looks like the meta package has updated to 2.0, i'm not seeing a 1.x package anymore
<gQuigs> so I guess the help text could say, if you want juju1.25 remove juju and the PPA and install it from trusty..
<lazyPower> gQuigs: that does appear to be the case. let me nuke this image and try again without adding the ppa
<lazyPower> gQuigs - that does indeed install 1.25, i also see a juju-core in this ppa enabled image too... that may be the 1.25 we're looking for
<lazyPower> gQuigs with that ppa enabled, if you install juju-core, you will get 1.25
<gQuigs> lazyPower: interesting.. yup, I had to remove juju-2.0 juju juju-core
<gQuigs> and then reinstall juju-core and I'm back to 1.25
<gQuigs> lazyPower: thanks!
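In shell form, the recovery gQuigs describes (run with the juju/stable PPA still enabled):

    sudo apt-get remove juju juju-2.0 juju-core
    sudo apt-get install juju-core   # resolves to the 1.25.x series on trusty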
<lazyPower> gQuigs at one time you could update-alternatives, i'm not certain thats still the case
<lazyPower> but you can, and should be able to, use juju 1.x and juju 2.x side by side on the same system. I know that marcoceppi does this, and he may have some insights on how you can do this that we should capture and put in the docs.
<gQuigs> should there be a PPA specific helptext?
<gQuigs> yea, I'm happy having separate machines for 1.x vs 2.x
<lazyPower> gQuigs : what would the help text say?
<gQuigs> lazyPower: currently it says this: http://pastebin.ubuntu.com/23324079/
<lazyPower> ahhh yeah thats stale
<lazyPower> thank you for capturing that. I'll file a bug
<gQuigs> so I'd say, We've detected you are using the PPA, please remove juju juju-2.0 juju-core and reinstall just juju-core, or whatever it needs to make it work
<lazyPower> gQuigs as you've got the primary information on how you arrived from A-Z would you mind terribly filing the bug? https://bugs.launchpad.net/juju/+filebug
<lazyPower> if they route through me, i won't have all the details you've got, so it's better if you take stakeholder on the bug
<gQuigs> lazyPower: I was about to say I can file it :)
<lazyPower> perfect :) Thanks for tracking this down.
<pmatulis> lazyPower, gQuigs: please open a docs issue with the relevant info to have a docs change - https://github.com/juju/docs/issues/new
<gQuigs> pmatulis: oh is that a docs bug?
<gQuigs> the help text in juju?
<lazyPower> pmatulis you bet, i'll xref w/ the core bug.
<lazyPower> that way if any action needs to be taken on either side we have a work order for it
<gQuigs> reported - https://bugs.launchpad.net/juju/+bug/1633542
<mup> Bug #1633542: Juju 2.x upgrade [ppa] doesn't show a workable way to downgrade to 1.25 <juju:New> <https://launchpad.net/bugs/1633542>
<gQuigs> should I report to gh too?
<lazyPower> gQuigs - Certainly, xref that core bug as I imagine the steps to fix will be posted there and pmatulis or i can circle back afterwards and cleanup the docs
<gQuigs> do I need more permissions to link them?  (I can link ubuntu bugs to other trackers...)
<gQuigs> gh is https://github.com/juju/docs/issues/1470
<lazyPower> thanks for the bugs gQuigs,   just putting in the link is enough.
<hallyn> if i have an empty .jenv file, preventing bootstrap, how should i re-init a new one?
<hallyn> (if i just delete it, then juju bootstrap panics)
<hallyn> oops, nm, had to s/_/-/ in google-provided creds for juju
<hallyn> (which juju should probably know about and just dtrt)
<vance> can i specify which machine in MAAS the juju controller will deploy to?
<jhobbs> vance, with juju 2, when you're bootstrapping you can use --to <maas machine name>
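A hedged example of that bootstrap placement (cloud and node names hypothetical):

    juju bootstrap mymaas maas-ctrl --to node03.maas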
<vmorris> jhobbs: thanks, i was just reading that in the docs. appreciate it
<cory_fu> tvansteenburgh, bcsaller, petevg: Reviews welcome on my two PRs on theblues: https://github.com/juju/theblues/pull/46 and https://github.com/juju/theblues/pull/47
<petevg> Cool. Taking a look ...
<cory_fu> petevg, bcsaller: Also, tvansteenburgh talked me into cowboying it in, but my changes to support bundle deploys in libjuju are at https://github.com/juju-solutions/python-libjuju/commit/fd2a74a2eba716e7ea298fe420f070221ff4581f
<petevg> bcsaller, cory_fu: I'm realizing that there's a good reason for the glitch tests to run outside of the matrix right now: I'm manually spinning up the model in the tests, and attaching it to a mock object that I'm using as a context. There's more integration work to be done outside of glitch before the glitch tests can just run things in the matrix.
<petevg> It's relatively small integration work, but I think that I'd like to ultimately keep it in a separate commit -- that PR is already huge.
<petevg> What say you two to giving a +1 on merging the glitch branch, and then I'll create a new branch for integration?
<bcsaller> petevg: let me do a quick pass over it, but yeah, that will be fine
<petevg> Cool. I'm creating the integration branch right now -- I'll just rebase when/if you give the go ahead to merge the glitch branch.
<cmars> bdx, hi, are you around?
<bildz> mtr green.5thcolumn.net
<cory_fu> petevg, bcsaller: Any objection to me moving all of the existing plugins (currently under tests/; save maybe for chaos) to matrix/plugins/ and switching to using entry-points for discovering plugins?
<petevg> cory_fu: no objection. I've already moved glitch there.
<cory_fu> bcsaller: Your current implementation uses a dot-path resolver, but I feel like entry-points are both more standard and easier to maintain
<cory_fu> But I'm up for arguments to the contrary
<bcsaller> cory_fu: I am fine with entry points as well, maybe, but I find the dot-path style has less friction
<petevg> glitch has a main.py that can be renamed to __main__.py, and it also drops the stuff it wants to expose into __init__.py, so it should work w/ either method.
<bcsaller> petevg: I am looking to merge your branch but have some changes I'd like to propose in code, I'll push a different branch in a bit, just wanted to let you know
<petevg> bcsaller: cool.
<cory_fu> petevg: I don't think main.py or __init__.py would matter to entry-points.
<petevg> cory_fu: I thought that entry-points really liked to have a __main__.py; I am most likely just being wrong :-")
<cory_fu> Nope
<cory_fu> They just need a pointer in some module's setup.py
<petevg> Got it.
<cory_fu> bcsaller: Having a dotted-path resolver kind of makes entry-points moot.  The purpose of entry-points was to make it so that everyone didn't have to implement their own dotted-path resolver.  ;)
<bcsaller> cory_fu: the setup.py part bothers me, I see this being our std tests + some included in the bundles test dir from which we'd resolve an additional plugin
<cory_fu> Plus to standardize how 3rd party libraries registered their plugins, so it wasn't just all ad-hoc
<bcsaller> in that context there may not be a setup.py
<cory_fu> Hrm.  Yeah, fair enough
<cory_fu> bcsaller: Ok, fine, I leave well enough alone.  :)
<bcsaller> ha
<cory_fu> I was just tired of seeing yet-another-dotted-path-resolver-implementation
#juju 2016-10-16
<narindergupta> i saw a juju 2.0 upload for trusty in stable ppa.  juju-core - 1:2.0.0-0ubuntu1~14.04.2~juju1	(changes file)	juju-qa-bot	2016-10-14	Published	Trusty	Devel
<narindergupta> hope this is not a mistake?
#juju 2017-10-09
<gnuoy> anyone got a sec to help with a network spaces issue? I have a neutron-api charm on 10.60.0.0/24 and 10.70.0.0/24 and a mysql instance on 10.70.0.0/24 and on the neutron-api charm I see:
<gnuoy> network-get --primary-address shared-db
<gnuoy> 10.60.0.5
<gnuoy> Why is juju deciding to return the 10.60 address?
<wpk> what does network-get shared-db show (without --primary-address)?
<boolman> how do I change sysctl settings inside lxd containers on deploy?
<boolman> I'm using maas as provider
<gnuoy> wpk, sorry, I redeployed my setup. This is what I'm seeing: http://paste.ubuntu.com/25707092/
<gnuoy> wpk, fwiw these are the other bindings on the neutron-api unit http://paste.ubuntu.com/25707152/
<junaidali> Hi guys, has anyone faced an autocompletion issue with juju commands?
<junaidali> I'm getting this error when I press tab to autocomplete  a command
<junaidali> _juju_complete_2_0: command not found
<gnuoy> wpk, I redeployed with shared-db=internal bind for the neutron-api charm and that fixed it.
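For reference, that binding is given at deploy time, roughly like so (the internal space name is from gnuoy's MAAS setup):

    juju deploy neutron-api --bind "shared-db=internal"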
<zeestrat> junaidali: Which version of Juju are you running and how is installed (deb, snap)? Did this crop up after upgrading something or a new install?
<ybaumy> pmatulis: please ask. thank you
<junaidali> Hi zeestrat: It's 2.0.2-0ubuntu0.16.04.2 installed via apt
<junaidali> a new install
<zeestrat> junaidali: Some of the devs here might know a solution, but just FYI that 2.0.2 release in the main repositories is quite old and it's recommended that you use the juju/stable PPA (https://launchpad.net/~juju/+archive/ubuntu/stable) if possible.
<junaidali> sure zeestrat, I will try that. Thanks
<zeestrat> junaidali: No problem. Also, in case you're interested and your environment allows it, the juju team recommends installing juju as a snap (see https://jujucharms.com/docs/stable/reference-install#ubuntu) which can be real nice.
<junaidali> zeestrat: do you think there would be any compatibility issues if I upgrade juju from 2.0.2 to the latest stable one (2.2.4)? I have a deployment with 2.0.2
<magicaltrout> you can snap install and have the apt installed version
<magicaltrout> I believe
<magicaltrout> rick_h: ping
<junaidali> magicaltrout: is the juju data directory (.local/share/juju) different for snap-installed juju?
<junaidali> I think I'm a bit confused here, I will try it out and will see what happens
<zeestrat> junaidali: Assuming this is a production deployment, I'd recommend testing these things in dev first. You can run different versions of Juju on your machine (client) and on controllers (server), then upgrade the controller/models if you want. See also https://jujucharms.com/docs/stable/models-upgrade
<junaidali> Thanks, I did have a look at controller upgrade. I will stick to client update atm.
<hallyn> kirkland: seriously, i urge you to find a clever way to address the problem that googling something like 'juju proxy' gives you outdated results (i.e. for 1.22)
<hallyn> i predict it will cut down on adoption
<hallyn> So if juju bootstrap for a vsphere controller behind a http proxy keeps giving me 'No packaged binary found, preparing local Juju agent binary' does that mean i didn't specify the proxy in the right place?
<hallyn> I've set it in ~/.local/share/juju/clouds.yaml , but juju show-clouds doesn't show the updated info
<hallyn> juju set-env no longer exists ...  (couldn't tell it by google, see above :)
<hallyn> axw: \o
#juju 2017-10-10
<axw> hallyn: hmm. I think you may need to set it in your environment too, to affect the client process. i.e. env http_proxy=... juju ...
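Spelled out, something along these lines (proxy address hypothetical; http-proxy/no-proxy are the standard model-config keys):

    env http_proxy=http://proxy.local:3128 no_proxy=192.168.0.0/24 \
        juju bootstrap vsphere vs-ctrl \
            --config http-proxy=http://proxy.local:3128 \
            --config no-proxy=192.168.0.0/24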
<axw> hallyn: BTW I'm waiting on a license to test against ESXi 5.1. the free version is too crippled to work with Juju
<axw> hallyn: from the looks of things so far, though, changing to hardware version 8 will be fine
<hallyn> axw: hm.  seems like no matter what i try, i get http://paste.ubuntu.com/25711441/ .  trying with http_proxy in env too now, though it does also need to respect no_proxy
<axw> no_proxy should be honoured
<hallyn> nope, i just keep getting the same thing...
<hallyn> "vm '*' not found" seems like it would stem from some confusion
<hallyn> axw: http://paste.ubuntu.com/25711792/
<axw> hallyn: yep, I don't know what that's about. can you please try 2.3-beta1? there have been significant changes to the vsphere provider since 2.0.2
<hallyn> axw: what the...
<hallyn> i installed from snap intending to have 2.3-beta1
<axw> hallyn: $PATH order I guess?
<axw> oh, or maybe you just need to specify --edge
<hallyn> i did sudo snap install juju --beta --classic
<hallyn> ah yes /snap/bin/juju is 2.3-beta1
<axw> beta, that's the one
<hallyn> looking better
<hallyn> things are being done
<hallyn> juju-vmdks are being uploaded
<axw> hallyn: is this with ESXi 5.1?
<hallyn> oh, no.  this is with a DC that only has my lone 6.0 :(
<hallyn> i'll re-try with a 5.1 added in later, if this works
<chamar> curious, is it with the free ESXi version?
<axw> hallyn: ok cool. you'll need to modify the source (OVF) though for that
<hallyn> chamar: i don't think so.  there's licenses at any rate.  (i inherited the lab...)
<axw> chamar: I tried with vSphere Hypervisor 5.1 earlier, doesn't work. Juju wants to create folders and clone VMs, which apparently don't work with the free version
<chamar> Thanks.  Got the same result with the free hypervisor.  sadly.
<chamar> there are features that are not available/enabled.  Same goes with MAAS... oh well.
<chamar> hum. removing a kubernetes-worker unit.. works well.  except it still appears in the k8s dashboard..humm
<hallyn> axw: so is there any point in trying with a 5.1?
<hallyn> sounds like no
<axw> hallyn: not without source changes, no
<axw> hallyn: did bootstrap complete on 6.0?
<hallyn> still running
<hallyn> axw: not without source changes to make it not try to upload the files, or do i really only need to change the min machine type?  (not sure based on what you were telling chamar)
<axw> hallyn: just changing vmx-10 to vmx-8. the other stuff was to do with the free version
<hallyn> axw: ok, i'll hopefully try that this week then.  one holdup has been trying to find the actual source package to pull-lp-source :)
<hallyn> or am i gonna have to learn how to do the snap thing
<axw> hallyn: git clone https://github.com/juju/juju/, then: make JUJU_MAKE_GODEPS=true install
<axw> requires go 1.8+
<axw> develop branch will become 2.3-beta2
<hallyn> cool thanks
 * hallyn sets up a build env
<hallyn> say, can no_proxy be a subnet?
<axw> hallyn: from 2.3-beta1 onwards, yes: https://github.com/juju/juju/pull/7885
<axw> I suspect there's some gotchas when it comes to external processes though, when juju shells out to wget/curl/etc., because it's non-standard
<hallyn> axw: nifty
<hallyn> ok, juju build going.  can i just scp the built ~/go/bin/juju over, or do i need more?
<hallyn> well the other juju bootstrap is still going.  new juju is built - i assume i'll have to rebootstrap?
<hallyn> will deal with it in the morning
<hallyn> thanks axw!
<hallyn> \o
<axw> hallyn: no worries, let me know if I can help any more. you'll need to scp the juju and jujud binaries, and yes you'll need to rebootstrap
<axw> hallyn: (I assume you mean scp to wherever you're bootstrapping from - you can't just copy over the top of the binaries in a bootstrapped environment)
<axw> also, seems like a long time for bootstrap - might be borked. if you can ssh to the VM, /var/log/cloud-init-output.log should tell you what's happening
<axw> assuming it got that far
<xavpaice> is there any way for an lxd unit to know what the hostname of its parent host is?  Would be handy for exporting nagios checks.
<hallyn> axw: /var/log/cloud-init-output.log on which ohst?
<bdx> on CMR, what things do we want to relate across models, should we only be concerned with logical groupings?
<bdx> for example
<bdx> I have an web application deployed to web-app-model, and a monitoring stack deployed to monitoring-model
<bdx> rick_h: for example, let's say I have the prometheus monitoring stack described in your blog deployed to the monitoring-model
<bdx> so I have this telegraf subordinate component
<rick_h> bdx: so mentally (and you can see it based on the status output) we think folks will basically have SaaS-like setups
<bdx> part of me wants to deploy telegraf to my "web-app-model", and make the CMR to prometheus in the "monitoring-model"
<rick_h> so a model will be the bits needed to offer up a SaaS endpoint (or DBaaS) and such
<rick_h> bdx: exactly
<rick_h> bdx: so you want the subordinate on each thing (many) but only one prometheus gathering the data
<bdx> the other part of me wants to deploy telegraf to the "monitoring-model", and make the CMR from the web server in web-app-model to telegraf in the "monitoring-model"
<bdx> rick_h: missing the point
<bdx> rick_h: see what I'm saying
<rick_h> bdx: k, sec sorry otp with folks on your models issue and trying to do two things at once
<bdx> ok, no rush
<bdx> lol thx
<rick_h> bdx: there might be a temp way to improve your models call until we can get some  updates into juju-core and new releases/etc. So was just seeing how we can make that happen.
<rick_h> bdx: but ok, phone over. /me rereads
<rick_h> bdx: so, I think that you'd put telegraf in the webapp model
<rick_h> bdx: you want that model to say that things are in fact being wired up to be monitored. telegraf is installed on each of those machines. It's a subordinate and does not affect the number of VMs and such in the web-app-models
<bdx> totally
<rick_h> bdx: and it'll be a LOT easier to see which future models have telegraf setup vs not
<rick_h> bdx: that's how my brain thinks anyway.
<bdx> right
<bdx> rick_h: so, the way I was thinking about it was, if a user needs something monitored, I just grant access to the telegraf:juju-info endpoints
<bdx> to that user
<bdx> telegraf is already related to prometheus in the monitoring-model
<rick_h> bdx: thinking...for some reason I really don't like subordinate relations over CMR...but I'm not 100% sure why
<bdx> so then I could essentially gate which users could monitor things by granting access to the telegraf:juju-info endpoint
<bdx> yeah, feel you on that
<bdx> rick_h: possibly because we want the charm to live on the controller of the model it's being deployed into
<rick_h> bdx: so...but at that point you're locked into telegraf
<rick_h> bdx: vs just "send stuff to prometheus"
<bdx> ahh, I see
<bdx> yeah
<rick_h> bdx: so if you used anything else, you'd need those setup as well
<bdx> totally
<rick_h> bdx: I think it's because prometheus is basically a database. I'd want to control access to the DB, not which apps are already wired to the DB
<bdx> entirely
<bdx> that makes sense
<rick_h> bdx: so I think what you're asking "would work" but it just feels really off in my head
<bdx> yeah, it does now for me too
<bdx> rick_h: thanks for being the voice of reason here
 * rick_h writes that down on the calendar "voice of reason!" :P
<bdx> I'm hooking up my first live CMR deploy with monitoring decoupled from the web app stuff, and the db decoupled from the web app stuff too, pretty exciting this is finally happening
<bdx> I'm expecting it all to work, 1st try, using beta2
<bdx> jp
<bdx> :)
<bdx> high hopes
<bdx> ^^
<rick_h> bdx: know that the prometheus charm needs an update to use the new networking stuff. I'm working on tests against charm-helpers to add the tooling for it
<bdx> ohhhh niceee
<bdx> rick_h: that's separate from CMR though right? or are they linked? like if you use CMR then you have to use the new network-get too?
<rick_h> bdx: so if you relate telegraf to prometheus over CMR, prometheus needs to use the public IP vs the 10.x one of the vm
<rick_h> bdx: right now the prometheus charm asks for the relation-get private address
<rick_h> bdx: vs using network-get -r and that will be CMR aware and provide a public address
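Roughly, from inside a relation hook on the prometheus side (endpoint name taken from the charm's target relation; output shape as in the paste further down):

    # $JUJU_RELATION_ID is set by juju for the running hook
    network-get target -r $JUJU_RELATION_ID --format yaml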
<stub> rick_h: I'd say that subordinates, by definition, are in the same container as their primary. And splitting the container over two models seems wrong.
<rick_h> stub: +1
<bdx> rick_h: that makes sense
<bdx> rick_h: what about the scenario where the models are in the same vpc
<bdx> ooooh
<bdx> I see, thats where the new network-get functionality comes in
<bdx> you can now make your charm choose which network interface to get the relation info for, so if you want to set the private interface info, then you can
<rick_h> bdx: right, so network-get is all "bindings aware" and provides more full featured network dump
<rick_h> bdx: so if I can get this test to pass I'll have a PR for new network_get() charmhelper to use for that
<bdx> oh nice, I think I see .... what you are working on is a wrapper in charmhelpers for the new network-get that will allow us to access the new functionality via the python api
<bdx> but is that only for "bindings", or does it differentiate between public vs private address too?
<bdx> e.g. aws instance
<bdx> deployed to a subnet in which it gets a public ip
<bdx> will have a private and public address, but may not use bindings, and may not be deployed to spaces via constraint
<bdx> then deploy another charm to another model (in the same vpc/private address space, just a different model) and relate those two charms via CMR
<bdx> what will happen? how do I control this?
<rick_h> bdx: https://github.com/pmatulis/juju-docs/blob/00f06dfa4f62020e5598253f0b066af9610df032/src/en/developer-network-primitives.md
<rick_h> bdx: making up some lunchables so in and out atm but give that a read
<rick_h> bdx: so in the meantime, you have to manually edit the prometheus config with the proper addresses for prometheus to reach telegraf across models...but hopefully that's not true by the EOD today
<bdx> nice, so your wrapper will basically give us access to all of the things that I'm concerned about I think
<bdx> great
<pmatulis> rick_h, https://jujucharms.com/docs/devel/developer-network-primitives :)
<rick_h> bdx: right
<rick_h> pmatulis: ty :)
<rick_h> Had the tab open a while. Heh
<pmatulis> ha ha
<bdx> so
<bdx> "Both ingress address and egress subnets may vary depending on the relation. This is because if the relation is cross model, the ingress address is the public / floating address of the unit to allow ingress from outside the model. And a given relation may see traffic originate from different egress subnets."
<rick_h> bdx: exactly
<bdx> rick_h: ok, so this better exposes what I'm concerned about
<bdx> "if the relation is cross model, the ingress address is the public / floating address of the unit"
<rick_h> So what we need to do is test this out and make sure in a vpc it behaves
<bdx> rick_h: so a common network setup/use case I use is to have multiple models in the same vpc
<rick_h> bdx: and file bugs and feedback during the betas on it
<bdx> right
<rick_h> If I follow your concern through
<bdx> like, I want to monitor things from one model to the next
<bdx> I don't want default talk over WAN
<rick_h> Yea, and don't expose anything on the internet you don't have to
<bdx> "if the relation is cross model, the ingress address is the public / floating address of the unit"
<rick_h> +1 so we've got to help test and build clear rules for how juju can "do the right thing"
<rick_h> bdx: that might involve specifying binding of endpoints to put things together clearly
<bdx> for me this means all of my monitoring cross talk and database <-> web app cross talk that I want to stay inside the vpc will automatically be forced over the wan if it's CMR
<bdx> right
<bdx> possibly^ is worded incorrectly
<rick_h> bdx: so what I'm saying is that might be the default behavior.
<bdx> oh
<rick_h> bdx: but, if you deploy and bind the endpoints to the internal vpc networks
<bdx> got it
<rick_h> bdx: then perhaps that overrides the default WAN behavior?
<bdx> I see
<rick_h> bdx: so that's my "you might have to set things up more clearly"
<bdx> ok, I'm tracking
<rick_h> bdx: and if that's failing, then we file bugs and ask wallyworld to help us out :)
<bdx> got it
<rick_h> bdx: so mentally, I'd expect it to WAN so that working > * as a default behavior.
<bdx> right
<rick_h> bdx: but that manually targeting the network paths through endpoint binding would do what an experienced user wants to be done
 * rick_h starts disclaimer'ing that he's not tested that out atm though....sooo....
<rick_h> any charmer folks know what error I'm getting here? https://pastebin.canonical.com/200266/
<rick_h> kwmonroe: ^
<cory_fu> rick_h: What channel of charm-tools are you using?
<rick_h> cory_fu: I'm trying to use a custom build to test out the PR https://github.com/juju/charm-helpers/pull/20 in a charm
<rick_h> cory_fu: so I did a make source in charmhelpers, updated the wheelhouse charmhelpers tar.gz to try to use my patched version
<rick_h> cory_fu: maybe there's an easier path but not sure how this balance works out
<cory_fu> rick_h: A custom build of charm-tools to test a charm-helpers change?
<rick_h> cory_fu: sorry, for charm-tools I'm using edge channel of charm
<rick_h> cory_fu: so I just did a charm pull prometheus and attempted to build it. I ended up doing a --no-local-layers --force to get it working enough to move forward.
<cory_fu> rick_h: So, from your pastebin, it's picking up the current directory as the source of the prometheus interface layer for some reason
<cory_fu> rick_h: (Note the "(from .)" at the end of the pastebin)
<rick_h> cory_fu: yea, I wasn't sure why there. I tried to unset the interfaces path
<rick_h> cory_fu: but it continues to think so
<cory_fu> It might actually be because you *don't* have INTERFACES_PATH set.  I'm going to test that.
<cory_fu> We should probably just skip any local interface layers if the path isn't set
<cory_fu> Or have better detection about what local path is an interface path
<rick_h> cory_fu: so originally it was set to my interfaces path, but to build this charm I didn't need any so I unset it in an effort to make it work out.
<cory_fu> Nope, having it unset does the right thing for me, too
<rick_h> cory_fu: k, I'm feeling my way through the best practices on working on these tools/charms and stumbling a bit. I assume I'm holding it wrong so curious what folks say when I hit stuff
<cory_fu> What was set to your interfaces path?
<rick_h> export INTERFACE_PATH=$JUJU_REPOSITORY/interfaces
<rick_h> echo $JUJU_REPOSITORY
<rick_h> /home/rharding/src/charms
<cory_fu> Yeah, that should be fine.  You don't have the prometheus charm checked out in that interfaces sub directory, do you?
<rick_h> and the charm is in the charms directory ".../charms/prometheus"
<cory_fu> rick_h: Ah!  That let me reproduce it.  I have my layers in a layers subdir (e.g., ~/charms/layers/prometheus).  Putting it directly into JUJU_REPOSITORY causes it to break
<rick_h> cory_fu: no, the only interface in the interfaces path is grafana-source. No other directories in there
<rick_h> cory_fu: oic, so yea holding it different than everyone else :)
 * rick_h has to run to get the boy from school, biab
<cory_fu> rick_h: I'll open a bug for this
<cory_fu> It should definitely be doing the right thing here and it's not
<zeestrat> cory_fu: are you the right person to bother for some questions regarding charm tools?
<zeestrat> I am having a bit of a hard time figuring out the intended/preferred way to use charm tools when developing some charms in regards to which distribution/version to use. Asked on the mailing list (https://lists.ubuntu.com/archives/juju/2017-October/009553.html) but not much luck.
<cory_fu> zeestrat: marcoceppi would probably be better to answer that question, because I'm not really clear on the versioning there, either.  I suspect that the versioning has just fallen out of maintenance since we moved to snaps as the preferred deployment and snap revisions already handle that need to some degree, but it definitely needs to be cleaned up.
<rick_h> cory_fu: kwmonroe is there a way to get the relation_ids just from juju run xxxx ?
<cory_fu> rick_h: juju run --unit <unit/0> -- relation-ids <endpoint-name>
<rick_h> cory_fu: gotcha ty
<rick_h> cory_fu: I figured they looked numeric and started with 0 and then 1 and found it :)
<zeestrat> cory_fu: Thanks. I'll try to ping him then. I'm already using the snaps in my dev environment which works great, but when I try to run some tests with bundletester as recommended in https://jujucharms.com/docs/stable/developer-testing, bundletester pulls an old charm-tools from PyPI which is a bit frustrating. How are y'all testing these charms with bundletester?
<rick_h> cory_fu: for getting a review and feedback on https://github.com/juju/charm-helpers/pull/20/files is there anything I should do?
<rick_h> cory_fu: that's going to hold up changes to the prometheus charm to leverage the updated networking information.
<cory_fu> zeestrat: Sorry for the delayed response.  marcoceppi will update PyPI to fix bundletester and we'll look in to getting things updated to be more consistent.
<cory_fu> rick_h: Merged
<rick_h> cory_fu: ty
#juju 2017-10-11
<Akshay> Hi All, while deploying the openstack charms (pike bits), the keystone charm is failing at the shared-db relation with error "keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 500)"
<Akshay> can someone please help me here
<Akshay> i am referring to charms from: https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-pike/
<EdS> Yo Tim :) this is awesome! https://insights.ubuntu.com/2017/10/11/private-docker-registries-and-the-canonical-distribution-of-kubernetes/ Am I right in thinking that this places the docker registry IN k8s? Would you have any advice for/against considering just adding a docker registry to our juju bundle definition of CDK?
<EdS> I think I can figure most of it out, I'm just so early in the process of getting to grips with k8s/juju/maas I've been planning what to do almost as you're writing up "nearly what I want"!
<EdS> Although, I suppose your method is much more efficient on the use of hardware. I like that.
<jamesbenson> morning all
<rick_h> juju show, juju show...everyone loves the juju show...10min
<rick_h> participation linky : https://hangouts.google.com/hangouts/_/52rzxhpdfff6fobzkl6vyf7txqe
<rick_h> and watching linky: https://www.youtube.com/watch?v=ZJG_1-ulGvI
<rick_h> 2min warning
<mattrae> hi juju reload-spaces with 2.2 is not properly getting changes to spaces from MAAS. juju spaces shows a subnet ended up under a wrong space, and also the cidr isn't getting updated. I'd like to repair the environment, is it ok to update the subnets and spaces collection to manually fix the enviornment until issues with reload-spaces are resolved? https://bugs.launchpad.net/juju/+bug/1705767
<mup> Bug #1705767: reload-spaces doesn't update space names. <network> <spaces> <sts> <juju:Triaged> <https://launchpad.net/bugs/1705767>
<mattrae> is it enough to update subnets an spaces collection to manually fix juju spaces? https://pastebin.com/eyVDJUja
<rick_h> wpk: jam ^ ?
<rick_h> mattrae: isn't there a command to update them from MAAS?
<jam> rick_h: it doesn't handle if you move a subnet from one space to another
<jam> we notice new ones
<jam> but we don't update existing ones
<rick_h> jam: oic, yea reload-spaces I was thinking of
<jam> there is a risk associated with moving a subnet if that subnet was in use
<jam> cause then apps that were using it are suddenly in another space
<mattrae> yeah the changes to the spaces/subnets in maas are not all getting reflected in juju spaces
<jam> mattrae: are these subnets in use, or not in use yet?
<jam> if they aren't in use, then probably updating the DB is ok
<mattrae> jam: yeah the subnet is currently in use, some units are bound to it right now i believe
<jam> mattrae: in which case, what will happen when you move it into another space is very much undefined behavior
<jam> mattrae: applications will have been configured to use specific IP addresses for particular use cases, etc, and suddenly those are no longer the right ones
<jam> which is why we didn't do all the work in reload-spaces, because it has potentially long tails
<mattrae> jam: cool sounds good, i'll have to do some testing of the behavior in this environment
<rick_h> cory_fu: have a sec? I'm trying to figure out my best path forward for updating the prometheus charm with the updated trunk of charmhelpers then. I manually built using make source and copied it into the charm, but that's not going to work for the actual charm source.
<atrius> Hello all. I'm seeing something with a recently installed setup where jujud is consuming 300 - 600% cpu time on a 32 core system
<atrius> I imagine that isn't expected?
<atrius> And now mongo is consuming 2600% O.o
<jamesbenson> impressive, I'm still struggling to get juju to install.
<pmatulis> jamesbenson, what's wrong?
<atrius> The install was easy.. the consuming all the resources in sight was less easy/good
<jamesbenson> atrius: I installed maas and have that working properly
<jamesbenson> but getting juju to function is difficult.  I just fixed some networking issues
<jamesbenson> but if I can ping you perhaps tomorrow that might be nice :-)
<atrius> jamesbenson: Honestly, I just used snap and then conjure-up
<atrius> Maybe that was my mistake since doing it that way seems to have resulted in.. interesting.. results
<jamesbenson> well I actually am trying conjure-up currently
<jamesbenson> but it seems like nothing was actually set up...
<jamesbenson> we are trying to tie it into maas here... both juju and conjure.
<stokachu> jamesbenson: hit me up if you have questions
<jamesbenson> thank you!  I will take you up on that.  Where are you based out of?
<bdx> rick_h, cory_fu: I want to know too!
<rick_h> bdx: so https://code.launchpad.net/~rharding/prometheus-charm/+git/prometheus-charm is my MP that with the updated charmhelpers that should work. I've tested it in non-CMR setup and will test it in a CMR setup later tonight
<rick_h> bdx: but in case that's useful to see how to use the new relation data and network-get stuff.
<bdx> rick_h: great
<bdx> thanks
<bdx> but how do you get that charmhelpers branch into your charm?
<bdx> do you have to build the wheel manually and put it in there?
<rick_h> bdx: so I did a charm build, and from the checked-out charmhelpers branch did a 'make source' and replaced the charmhelpers in the wheelhouse directory of the built charm.
<bdx> ok, thats what I thought, perfect
<bdx> thx thx
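As a rough recipe of what rick_h describes (paths hypothetical; the built-charm location depends on your charm-tools version):

    cd ~/src/charm-helpers && make source   # sdist lands in dist/ (location assumed)
    charm build ~/src/charms/prometheus
    cp dist/charmhelpers-*.tar.gz \
        $JUJU_REPOSITORY/builds/prometheus/wheelhouse/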
<bdx> rick_h: https://github.com/jamesbeedy/interface-redis/blob/master/provides.py#L23
<bdx> its that ^ which I will want to be replacing with network_get() right?
<bdx> anywhere where unit_get('private-address') is used
<rick_h> bdx: yes
<rick_h> You want to use ingress address
<bdx> totally
<bdx> but I think we didn't come to a conclusion about what happens when you want ingress from your private address space
<bdx> because by default you get the public
<bdx> right
<bdx> so, when you only have private address space it should fall back to it though
<bdx> is what jam was saying I think
<rick_h> bdx: yea the result is try it and see what is in the network info returned
<bdx> totally
<bdx> rick_h: so, it's not really clear how I would use that in an interface, I need to pass it a relation_id obtained from the conversation ?
<bdx> similar to https://github.com/jamesbeedy/interface-memcache/blob/master/requires.py#L36
<bdx> kind of
<bdx> ?
<bdx> or
<bdx> I guess I see the other input param for relation id there
<bdx> so, basically ... not really sure how I will get the endpoint name from the interface perspective
<bdx> to be able to pass into network_get()
<bdx> rick_h: possibly its an architectural flaw of mine in the interface design
<bdx> and I possibly shouldn't be trying to get the local info about the node like network_get() in the interface
<bdx> otherwise, I just think that the interfaces will need to be passed an extra argument by the layer
<bdx> such that the interface knows what the endpoint name is
<bdx> because the name is arbitrary at the layer level right?
<bdx> so yeah, make it something that's passed in as relation info i'm thinking
<bdx> oh, or I could just make the call to network_get() in the layer, and pass the 'host' info into the relation as relation info
<stokachu> jamesbenson: etc
<jamesbenson> ?
<stokachu> jamesbenson: east coast
<stokachu> US
<jamesbenson> ah, okay
<jamesbenson> thanks :-) CST here
#juju 2017-10-12
<rick_h> bdx: sorry, was at the boy's soccer game. you get setup?
<bdx> rick_h: yeah, g2g
<bdx> rick_h: so am I right to think that the way an interface sets host ip info (as many do), say on the provides side of the relation
<rick_h> bdx: cool, I'm testing out the CMR version of prometheus right now. /me just made the relation and hopes this works out
<rick_h> bdx: sorry, I didn't parse that with a question in it?
<bdx> should now be done differently than the now-legacy way
<rick_h> bdx: yes, I think so
<rick_h> bdx: a lot of charms I think will need updating to work in a CMR world
<rick_h> bdx: so once I prove this works I want to get a couple of solid examples and start making a LOT of noise to charmers
<bdx> yeah,
<bdx> so
<rick_h> and damn it didn't work.../me goes debug-log chasing
<bdx> this is my mod of the http interface https://github.com/jamesbeedy/interface-http/blob/network_get/provides.py#L20
<bdx> such that it only allows the providing layer to set those values
<rick_h> oh, dammit I didn't deploy my forked charm whoops
<bdx> instead of https://github.com/jamesbeedy/interface-http/blob/master/provides.py#L22
<bdx> ^ the old way
<bdx> where the ip address is obtained and set in the interface code
<bdx> vs my new way
<bdx> or
<bdx> the* new way
<bdx> where we use network_get()
<rick_h> bdx: hmm, yea I'm not sure on that part. I've found that confusing in my work as well the line between work done in the layer vs the reactive bits of the charm itself
<bdx> so
<bdx> rick_h: https://github.com/jamesbeedy/newcharm/blob/master/reactive/newcharm.py#L22
<bdx> ^that worked for me
<bdx> with the network-get branch of the interface-http from above^
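The pattern in that branch boils down to something like this sketch, assuming the network_get() helper from the charm-helpers PR discussed earlier and the configure() signature of the http interface (port value hypothetical):

    from charms.reactive import when
    from charmhelpers.core import hookenv

    @when('website.available')
    def configure_website(website):
        # juju's network data scoped to the 'website' endpoint;
        # 'ingress-addresses' lists what a remote (possibly cross-model)
        # unit should connect to
        info = hookenv.network_get('website')
        website.configure(port=80, hostname=info['ingress-addresses'][0])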
<bdx> but I'm not sure if that's the way we *want* to be doing it
<bdx> I could have passed the interface name into the relation data, then used network_get() in the interface code to parse there
<bdx> but that just doesn't seem right
<bdx> possibly the interface name is already available in the conversation
<rick_h> bdx: hmmm, but is the interface name defined by the layer?
<bdx> yeah
<rick_h> bdx: I mean if you've got the mysql layer you know you're speaking db already
<bdx> oh
<bdx> I see
<bdx> so the provides side of the interface can have a statically defined interface name, because we know what it is in the providing layer
<bdx> and then the provides.py can just define that
<bdx> and use it in network_get()
<bdx> that sounds a bit nicer
<bdx> rick_h: like this https://github.com/jamesbeedy/interface-http/blob/network_get/provides.py#L20
<bdx> ?
<bdx> rick_h: ^ doesn't work
<bdx> because we don't know that the layer will name the interface http
<bdx> the name that the layer gives interface-http is defined in layer.yaml
<bdx> so I just don't see a legitimate way of hardcoding it in the interface code
<bdx> the only case where ^ works is if you name the http interface http in your layer
<rick_h> bdx: hmmm...yea
<bdx> see what I'm saying
<rick_h> bdx: and the relation id is going to be needed in cases where you relate more than once as well. This seems messy
<bdx> right
<rick_h> lol...damn this is cursed
<rick_h> so I have this updated prometheus charm
<bdx> rick_h: yeah, I read through your changes
<rick_h> and its target endpoint is offered...but now because it's offered prometheus auto adds itself as a target.
<rick_h> so it's loading itself using the public address since it's offered as a CMR endpoint
<rick_h> which then fails because prometheus isn't exposed (even to itself over the public IP)
<rick_h> so "working as intended" ... but broken
 * rick_h beats head on the wall for inspiration
<bdx> I see
<bdx> rick_h: what is ever done with "address" in write_prometheus_config_yml()
<bdx> I see its set, but never used
<bdx> a bit confusing for sure
<rick_h> bdx: so it's used to write out the prometheus.yml file later in that function
<bdx> oh
<bdx> I don't see the var "address" used again ... bad eyes
<rick_h> bdx: it's not part of that diff, it's used with private_address key in options later in that function
<rick_h> bdx: oh lol, I didn't update that
<bdx> ok
 * rick_h must have done some bad copy/paste/etc
<bdx> lol,
<rick_h> 'private_address': hookenv.unit_get('private-address'),
<rick_h> that's supposed to be the address bit
<bdx> yeah
<bdx> thats what I thought
<rick_h> somehow missed that. I started with a charm pull (no git bits) and made changes/etc and then tried to merge them to a git clone from LP and must have missed it somehow
 * rick_h is stumbling around charming like a noob still
<bdx> so I think (for the case of http interface) it seems most sane to use network_get() in the providing layer, and set the ip info on the relation from the provides
<bdx> me 2
<bdx> {'ingress-addresses': ['107.22.129.65'], 'bind-addresses': [{'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}, {'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}], 'interfacename': 'eth0', 'macaddress': '12:ba:53:58:9c:52'}, {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}], 'interfacename': 'fan-252', 'macaddress': '1e:a2:1e:96:ec:a2'}]}
<bdx> rick_h: we should work on coming up with a strategy for parsing ^ depending on what you want
<bdx> for example, I want the private network interface ip for the instance
<bdx> there's no clear way to distinguish which of those addresses it is ....
<bdx> I know
<bdx> you could check for interface names in the dict
<bdx> and then decide what to do based on what interfaces you find
<bdx> but ^ so hacky
<rick_h> bdx: yea, the key thing is I need to better understand what could possibly come out of that dict
<rick_h> bdx: atm it's not 100% clear what's useful vs not as useful. I assume later updates to the code can do things like add additional kwargs to the network_get to be more useful
<rick_h> bdx: so that's why I've started out just dumping all the data back to start
<bdx> yeah, I'm doing the same
<bdx> good call
<bdx> rick_h: what about a "default" space
<rick_h> ?
<rick_h> bdx: sorry, getting late here and I'm slow and not following
<bdx> sorry, no, its my bad, same
<bdx> what I'm digging at here, is how to get what I want from the network_get() output in a no-nonsense way
<rick_h> bdx: yea, I think the key is going to be that the charm has to be deployed with the interface bound to a space.
<bdx> entirely
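A hedged sketch of one way to pull a private address out of that output, assuming the exact structure bdx pasted above; the rule of skipping fan-* interfaces is only a heuristic, exactly the kind of hack being complained about:

    def pick_private_address(info):
        """Pick a non-fan bind address, falling back to ingress."""
        for binding in info.get('bind-addresses', []):
            if binding.get('interfacename', '').startswith('fan-'):
                continue
            for addr in binding.get('addresses', []):
                return addr['address']
        ingress = info.get('ingress-addresses', [])
        return ingress[0] if ingress else None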
<rick_h> bdx: but going to punch out for the night. good luck and let me know how it goes!
<bdx> kk
<bdx> thanks
<rick_h> np, ty for banging on stuff
<Akshay> Hi All, can someone please help me here. We at Veritas Technologies LLC are trying to deploy juju based openstack pike bit but the deployment is failing for mysql charm at hook "shared-db" with error "keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 500)"
<Akshay_> Hi All, can someone please help me here. We at Veritas Technologies LLC are trying to deploy juju based openstack pike bit but the deployment is failing for mysql charm at hook "shared-db" with error "keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 500)"
<burton-aus> Hi Akshay_ would you mind subscribing to this mailing list and then sending your question there with details? juju@lists.ubuntu.com
<burton-aus> Akshay Akshay_ the subscription link is: https://lists.ubuntu.com/mailman/listinfo/juju
<Akshay_> thanks for the help Burton, I will try that.
<burton-aus> Akshay_ no worries.
<Akshay_> Hi Burton, I have subscribed to the mailing list; how long does it generally take to get verified/confirmed?
<Akshay_> As my mails are getting rejected for now
<burton-aus> Akshay_ let me ask admin to approve it for you.
<skay> how can I get the set of current flags during tests? I'd like to do something similar to wait_for_messages from amulet
<cory_fu> skay: There's no API for that defined directly in Amulet, but you can use something like yaml.load(unit.run('charms.reactive get_states --format=yaml')[0]).keys()
<skay> thanks
<cory_fu> skay: kwmonroe just reminded me that you should probably use get_flags instead, which I think would make the .keys() bit pointless as well
<skay> ack, thanks!
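Putting those two suggestions together, a hedged sketch of what that looks like in a test, assuming an amulet unit sentry; note that unit.run() returns an (output, exit_code) tuple, so the output needs unpacking before it goes through yaml:

    import yaml

    def current_flags(unit):
        output, code = unit.run('charms.reactive get_flags --format=yaml')
        return set(yaml.safe_load(output) or [])

A test can then poll current_flags(...) much like amulet's wait_for_messages.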
<skay> I'm forcing an error in my reactive method that handles a flag being set, and my status shows this: (upgrade-hook) my error message
<skay> I've been doing that live, but now I want to get an amulet test going so that I can figure out why that's happening
<skay> I'd prefer to give a status to the operator that the failure is inside my method and not due to a failure in a hook
<skay> that's another problem I've been having
<skay> also, TIL I shouldn't use an error status, I should use blocked.
<skay> hmm, I've let it run for a while and the status has settled down and no longer prefixes it with the hook name. I can make do by prefixing my error statuses by hand for now if I want that
<kwmonroe> skay: i'm not entirely sure what you're asking -- are you having trouble setting/catching status in an amulet test?
<kwmonroe> and yeah, +1 on blocked vs error (when you're actually blocked, of course :)
<skay> kwmonroe: no. I was having trouble with the status appearing how I wanted it to
<skay> kwmonroe: I was mistaken, end of problem.
<kwmonroe> w00t - the end of a problem is the best part.
<skay> kwmonroe: for a while, it looked like the status I had set was appended to "(upgrade-hook) <my status string>"
<skay> carry on :)
<kwmonroe> heh
<ryebot> What does the offline story for juju look like?
<ryebot> Are there any docs for offline installs?
<bdx> I'm wondering about a story too, does lib-juju have support for JAAS (obtaining/discharging the macaroons) yet?
<cory_fu> skay: The hook name prefix is just to indicate which hook is currently executing and should not be considered when matching status messages in Amulet
<skay> ack
<cory_fu> It's automatically added in the tabular format status by Juju, but isn't actually part of the message.  You can see the raw message with `juju status --format=yaml [application]`
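A hedged sketch of checking the raw message programmatically, assuming the standard layout of juju status YAML output:

    import subprocess
    import yaml

    def raw_workload_message(app, unit):
        out = subprocess.check_output(['juju', 'status', '--format=yaml', app])
        status = yaml.safe_load(out)
        # the raw message has no "(hook-name)" prefix, unlike tabular output
        return status['applications'][app]['units'][unit]['workload-status']['message']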
<razorz> Anyone seen an issue with using juju/lxd on an ubuntu server box hosted on an ESXi box and trying to do bridging? lol  I got it working last night on my home machine (bare metal) in like 5 minutes, but on this server, even after a clean install, it's still a mess..
<bdx> razorz: ensure to set promiscuous to 'true"
<bdx> on your esxi interfaces
<bdx> razorz: and also on the vm interface too
<bdx> not sure thats your problem
<bdx> but thats where I would start
<razorz> bdx: I'll check that, thanks
<bdx> np
<razorz> bdx: very possible promiscuous was the issue
<bdx> razorz: cool, yeah .... I know its disabled by default, and an easy thing to miss. glad you got it working
<razorz> bdx: I literally spent 15 hours on this.. creating new bridges, breaking LXD totally to the point I had to reinstall the OS and it was that dumb lol
<bdx> razorz: lol oh man, at least you gave it a good go
<bdx> razorz: "even after a clean install" - these were your key words
<bdx> lol
<bdx> razorz: I feel your pain with that though, I cant even count how many times that had bitten me with "clean installs" of esxi
<razorz> bdx: well I've been so used to dealing with nothing but ESXi and basic Linux crap and of course windows.. and I do all the Cisco routing here as well. I thought i had a routing issue going on
<razorz> and that I was oblivious to the way that Linux handles bridges
<razorz> but when it worked at home in a few minutes I was dumbfounded.. then came here and did the exact thing and it timed out, wanted to punch my screens
<bdx> haha, right
#juju 2017-10-13
<wpk> Is it possible to deploy Kubernetes with conjure-up to a non-default VPC on EC2?
<mark-dickie> Hello all! I'm quite new to juju and am writing a charm which utilises layer:snap but it never seems to actually install the snap. Is there anyone here who might know what I've done wrong?
<boolman> I'm having issues with a charm, https://github.com/MartinHell/charm-collectd/blob/6338fe9d99d8c8c4f510cff28cf617aebdd6f901/reactive/collectd.py#L220  "AttributeError: module 'charmhelpers.fetch' has no attribute 'archiveurl'"
<boolman> nvm i fixed it
<EdS> hello :) I've brought up canonical-kubernetes using juju, after having conjure up fail. I think I was left with a kubernetes setup that has lots of the settings as per the defaults with conjure up. Would someone be able to advise, for example, how I'd repeat this process to get the kubernetes "external" IPs to be in a subnet of my choosing?
<EdS> If it makes any difference, we're hosting this ourselves and it's all provisioned through MAAS
<kjackal_> hi EdS, I can give it a try
<EdS> hi kjackal :) thank you!
<kjackal_> EdS: you are deploying canonical-kubernetes
<kjackal_> what do you mean by "external" Ips?
<EdS> ok, sorry for my terminology. I mean the IP addresses assigned to services that I expose.
<kjackal_> ok how do you expose the services? nodeport?
<EdS> the kubernetes cluster can "expose" a service and it is then assigned an "external ip"
<EdS> however, I've never seen anywhere where I can define the CIDR for these addresses
<EdS> Conjure up appeared to allow me to set the desired properties of kubernetes, but did not work.
<EdS> Juju has worked really smoothly, but I missed out on all the tweaking that would make this new kubernetes cluster usable to us!
<EdS> yes, I have exposed the first test service with nodeport
<EdS> and I have ended up with a seemingly random IP 10.1.63.10
<kjackal_> the 10.1.63.10 is one of the kubernetes nodes, right?
<EdS> no, the nodes are on 10.10.10.0/16
<EdS> ah, sorry, that's the IP of the pod
<kjackal_> k8s has a service-cidr config variable
<EdS> ok, brilliant, that sounds like the right thing.
<kjackal_> can you show me a juju config kubernetes-master
<EdS> the exposed service, if I read this right, is 10.152.183.97
<kjackal_> that sounds better because the service-cidr has a default value of: 10.152.183.0/24
<EdS> aha ok
<EdS> so, I think the question is now much simpler. Do you know how to set that? :p
<kjackal_> but you cannot change the service-cidr after the initial deployment
<EdS> ok, that's fine
<kjackal_> you will need to redeploy k8s
<EdS> how would I set it, at all?
<kjackal_> give me a sec looking for the documentation page
<kjackal_> aaah it will be faster if I just tell you
<EdS> I think, with juju it was so smooth it felt like magic (ok, so it's in the name) that important things like this were missed (at least for me) because of my half-success with conjure up perhaps leaving config around? If that's even possible to happen? IDK
<kjackal_> thats a good suggestion
<kjackal_> so what we will do is to grab the bundle from the store change the config variable and deploy it
<EdS> ok, I have that already as I had to tweak constraints
<kjackal_> can you do a "charm pull canonical-kubernetes"
<EdS> :)
<kjackal_> awesome
<kjackal_> so you go under the kubernetes-master service and you set the service-cidr to what you need
<kjackal_> let me do this here so I tell you exactly how this looks
<cory_fu> EdS: When you say that conjure-up failed, can you give me more info?  I don't know much about the k8s side, but I'd like to sort out any issues with conjure-up at least.
<kjackal_> EdS: it should look like this: http://pastebin.ubuntu.com/25732311/
<EdS> oh wow :)
<EdS> ok will give that a shot.
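The same edit can be scripted against the pulled bundle; a hedged sketch, where the CIDR value is just an example and the top-level key depends on the bundle format (older bundles use services, newer ones applications):

    import yaml

    with open('bundle.yaml') as f:
        bundle = yaml.safe_load(f)

    apps = bundle.get('applications', bundle.get('services'))
    apps['kubernetes-master'].setdefault('options', {})['service-cidr'] = '10.100.0.0/16'

    with open('bundle.yaml', 'w') as f:
        yaml.safe_dump(bundle, f, default_flow_style=False)

After that, `juju deploy ./bundle.yaml` deploys with the new service-cidr (which, per kjackal_, cannot be changed after the initial deployment).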
<EdS> Cory, two seconds. :)
<EdS> cory_fu: I have a feeling that I was running into several things at once. I'm hunting a few tickets
<EdS> first one; too many machines used, so it ran out of machines to provision
<EdS> like this: https://github.com/conjure-up/spells/issues/67
<EdS> except our scenario was less extreme than 4->18
<cory_fu> We just had a discussion yesterday about having Juju do better about verifying MAAS / cloud limits / availability early on.  :/
<EdS> thanks so much for your help kjackal, that had eluded me for ages
<EdS> lol yeah, might help me out. I unpacked a lot of extra machines trying to get around this
<EdS> but got it going in the end.
<cory_fu> EdS: Odd.  I thought that the "too many machines" bug was resolved already.  Any chance you still have the ~/.cache/conjure-up/conjure-up.log file?
<EdS> while you're here... can you satisfy an enquiring mind? did my conjure-up attempts store configuration that was used in a subsequent attempt with juju and a bundle file I specified myself? Or am I over thinking this?
<EdS> This wasn't exactly in the last few days. I can go digging and see if I have it.
<EdS> ooh lots of evidence :/
<EdS> shall I pastebin?
<cory_fu> EdS: Not currently.  If you don't go past the "Configure Applications" screen and click the "Deploy All" (or every individual deploy) button, nothing will get saved
<cory_fu> Well, technically, we were planning on having a resume feature, so we might persist choices into a sqlite db in that ~/.cache/conjure-up directory, but they're never read in again
<cory_fu> EdS: Yeah, pastebin of the log would be helpful.
<EdS> ok, thanks, that clears up a few doubts
<cory_fu> jam: Hey, can you confirm if a unit's IP address changes due to DHCP whether Juju would trigger a config-changed hook?
<EdS> my conjure-up log... sorry about many times I tried this... http://pastebin.ubuntu.com/25732388/
<jam> cory_fu: so we trigger config-changed on startup anyway, but I'm not 100% sure about where we ended up from auto-populating private-address with new values because of charms that override the value. (openstack charms used to set the VIP instead of their personal addresses)
<jam> that said, if a live machine changes its IP address, I think we'll notice within 10 minutes or so, I'm not sure if that immediately triggers a config-changed.
<cory_fu> EdS: From that log, it looks like you might have had several successful runs.  Did any of those actually succeed or did they get stuck?
<EdS> It always got stuck, but that may have been because of various external things.
<EdS> cory_fu: I was setting up MAAS, juju and reading lots.
<cory_fu> EdS: Odd.  If it got stuck deploying, I would have expected to see log messages about 00-deploy_done failing
<EdS> cory_fu & kjackal: :D thanks so much - that has straightened a lot out in my head!
<EdS> it's entirely possible I have cleared out the log of the failed runs, but it never felt like I truly succeeded with conjure up
<cory_fu> EdS: I do see some failures in there related to the connection to the controller failing.  That seems plausible if the machines were provisioned and not released.
<cory_fu> EdS: You asked about it saving info; as I mentioned, there shouldn't be any persistent effects if you stop before the deploy, but from the log, it looks like you went that far a few times.  Obviously, you'd have to clean up any provisioned machines or anything else that Juju or conjure-up claimed in MAAS
<EdS> cory_fu: yeah. I managed those bits. I think the difference between doing it with conjure-up and juju tricked me into thinking I'd get a similar opportunity to tweak the settings. When juju + maas worked it was all up, but now I realise with defaults, not any leftover config.
<EdS> cory_fu: I think as I'm at the early stages of this setup, I'll tear it all down and try to get the settings I wanted :)
<cory_fu> EdS: Ok.  If you end up trying conjure-up again with any MAAS issues sorted out and have any issues again, let me or stokachu know.  We're travelling, so might not respond right away, but we'd like to sort out any bugs you might run in to.
<cory_fu> But Juju direct is also entirely viable and should be just as configurable, even if it might not be presented as nicely.  (At the end of the day, conjure-up is just calling out to Juju, after all.)
<EdS> cory_fu: superb, thank you. I'm just setting off from the start again with juju + the bundle. I think personally, the yaml is fine for me. Enjoy your travels.
<BarDweller> Hiya.. I know I had this working before.. but then I wiped that box & started again.. I'm trying to have my kubernetes (loaded via conjure-up) use my docker registry (running on the host that did conjure up) .. I thought I used juju run-action registry to make this work before, but that seems to be for secured registries, and mine is unsecured..
<BarDweller> I found https://insights.ubuntu.com/2017/10/11/private-docker-registries-and-the-canonical-distribution-of-kubernetes/  which hints I need to set a config key .. which I think is now 'docker-opts' not 'docker-config' as in the article.
<EdS> how's the config here look? https://insights.ubuntu.com/2017/10/11/private-docker-registries-and-the-canonical-distribution-of-kubernetes/
<EdS> Tim passed me the link here the other day :)
<BarDweller> yeah.. thats the link I just pasted right ? ;p
<EdS> oh haha sorry
<EdS> docker-opts sounds familiar from recent docker versions
<BarDweller> anyways.. I've done "juju config kubernetes-worker docker-opts="--insecure-registry 192.168.1.xx:2375" .. do I also need to do the juju run-action registry step ?
<BarDweller> (because atm, if I have an image: tag in my yml for 192.168.1.xx/myimage:latest it complains getsockopt connection refused)
<EdS> not if you've already got the registry, it sounds like you have.
<BarDweller> I have a registry running at 192.168.1.xx:2375 that I can talk to, push images to, run containers on etc
<BarDweller> seems tho that my worker node can't talk to it.. I'm missing something.
<EdS> yeah. don't deploy a registry with juju run action then :)
<EdS> is your registry in the same subnet as nodes?
<tvansteenburgh> BarDweller: i would juju ssh to the node and try a docker pull from there, and see what that tells you
<tvansteenburgh> sounds like a networking issue
<BarDweller> good plan..
<BarDweller> juju ssh kubernetes-worker/0    .. and then `docker images` is showing me a different docker registry..  but from that env I can ping my other one ok.. lemme see if I change DOCKER_HOST if I can talk to my other reg from that shell
<BarDweller> yep
<BarDweller> so the kubernetes-worker/0 is capable of reaching my docker registry, and can talk to it.. but seems configured to use a different registry
<BarDweller> hmm.. do I need to do something after the juju config that tells the worker to use my registry? (restart the worker or sommat?)
<tvansteenburgh> juju config kubernetes-worker - do the docker-opts have the correct registry in that output?
<tvansteenburgh> BarDweller: when you set it via config, the charm should do everything for you
<tvansteenburgh> if it's not, that's a bug
<BarDweller> yes, juju config kubernetes-worker shows the options I put in docker-opts (--insecure-registry 192.168.1.xx:2375)
<BarDweller> any way I can kick it to tell it to read it ?
<BarDweller> hmm.. wait up
<BarDweller> vagrant ssh in the kubernetes-worker/0 then docker info shows my registry listed in there.. digging further
<EdS> hmm. I have torn down my canonical-kubernetes setup to rebuild with a different service-cidr. This is now stuck waiting with flannel blocked :/
<EdS> I think I will try again, I have noticed the 1.7->1.8 version bump.
<BarDweller> yeah.. there's something odd here.. it's not a network issue, it's a docker config issue.. I'm trying variants atm
<BarDweller> mebbe it'd be easier if I just started using the registry from the juju charm ?
<tvansteenburgh> BarDweller: if you're just playing around that's fine but it's not a production setup
<BarDweller> yeah.. this isn't for prod, it's for local dev
<BarDweller> I just need a way to push custom images that I can load into the kube =)
<EdS> for production systems, used only internally within our company, would you consider it ok to run a registry pod/service with images stored in an nfs PV?
<EdS> I'm not sure we need or want to give docker-registry a server of its own. It seems overkill for us.
<tvansteenburgh> BarDweller: gotcha. i'm still keen to figure out what the issue is so we can fix it if we need to
<tvansteenburgh> Eds: yes - the helm chart in that blog post is great for that
<BarDweller> yeah.. I'm still digging.. I'm not entirely sure I've got everything lined up right
<EdS> tvansteenburgh: thank you :)
<BarDweller> I know if I do `juju ssh kubernetes-worker/0` and then do `export DOCKER_HOST=192.168.1.xx:2375` and then do `docker images` that I can see my expected images
<BarDweller> so I know my docker is up, and reachable by the worker node.
<tvansteenburgh> BarDweller: okay, that's good feedback - we can try to reproduce
<BarDweller> so then I do `unset DOCKER_HOST` and then `docker info` and I note at the bottom it lists "Insecure Registries: 192.168.1.xx:2375"
<tvansteenburgh> hmm
<BarDweller> and then I try 'docker pull 192.168.1.xx:2375/my-image:latest` and it says error image not found.
<BarDweller> which is an improvement from before.. where it was saying getsockopts conn refused.
<tvansteenburgh> BarDweller: would you mind filing a bug with this info here: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/new
<BarDweller> I had this working a few weeks back.. I have the kube yamls that say so.. but I wiped the host I'd done the magic to add the registry on.. and failed to add what I did to my vagrantfile =)
<BarDweller> mebbe my registry isn't the right thing
<knobby> I assume your `docker images` output, when you set DOCKER_HOST, shows my-image.
<knobby> I use an insecure registry hosted outside k8s and all I had to do was add that option
<BarDweller> I'm seeing people saying they can do things like http://ip:port/v2/_catalog to see images.. mine doesn't seem to like that just gives back "{"message":"page not found"}
<knobby> docker pull 192.168.1.xx/image_name just works
<knobby> are you running a version 1 registry?
<BarDweller> yes, if I set my DOCKER_HOST to be 192.168.1.xx then docker images will show my-image
<BarDweller> checking..
<BarDweller> apparently I'm running 17.09.0-ce, api version 1.32 (min ver 1.12) build date sep 26 2017
<BarDweller> (from docker version while docker host is set)
<BarDweller> I wonder if I don't have a docker registry, I just have a docker server.. #noobquestion is there a difference ?
<knobby> BarDweller: how are you running it? the docker registry is a docker container named registry
<knobby> I'm running registry:2 for example
<knobby> with DOCKER_HOST working it sounds like you're using a docker daemon instead of a registry
<BarDweller> loosely .. apt-get install -y docker-ce socat .. then update dockerd options to pass -H tcp://0.0.0.0:2375
<BarDweller> yes, that's the realisation I'm coming to (re daemon vs registry)
<knobby> ah, yep. a registry is typically on port 5000 and is run via something like `docker run -p 5000:5000 registry:2`
<BarDweller> ok.. so should I change my original question to.. is there a way to have my kube-worker pull an image from my docker-daemon ?
<knobby> BarDweller: I would think it would be easier to crank up a registry myself.
<knobby> BarDweller: docker run -p 5000:5000 -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry -v /my/registry/volume:/var/lib/registry registry:2
<knobby> BarDweller: something like that would do it
<BarDweller> hehe.. sounds like an idea.. I'll have a bash
<BarDweller> although you'd kinda think by this point I should just use the juju docker-registry charm
<knobby> BarDweller: if you have a machine for juju to snap up for it, sure. For me, I'm using bare metal and didn't want to waste resources on something that is used so infrequently. I also was able to put it on the nfs server, so file io was local
<BarDweller> I have all this inside a vagrant vm.. so it really doesn't make too much difference.. at the mo the vm is running the dockerd .. I'll try deploying a registry first, because that might integrate easier
<knobby> BarDweller: sounds like a good idea
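A hedged sketch of telling the two apart: a real registry answers the v2 API that BarDweller tried earlier, while a plain docker daemon does not. The host and port below are placeholders matching the redacted values above:

    import json
    import urllib.request

    # a registry:2 container answers this; a bare dockerd does not
    with urllib.request.urlopen('http://192.168.1.xx:5000/v2/_catalog') as resp:
        print(json.load(resp))  # e.g. {"repositories": ["my-image"]}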
<BarDweller> ouch.. I think I figured this out =)
<BarDweller> so to have docker client talk to an insecure registry, you add the --insecure-registry option to the dockerd, (or use daemon.json) however, if the docker you are using is remote, you do it to _that_ docker .. which is awesome in my case, because it means the clients of my vm wont need to care
<BarDweller> cool.. my image came up finally inside kube =) thanks for the assist =)
<knobby> BarDweller: glad to hear you got it going!
<BarDweller> yep.. I think before I had used the juju charm to deploy a registry.. but it's not clear to me how I ever had that working, because I never configured anything beyond 'domain'  and set ingress =true .. I never had all the insecure-registry stuff before
<BarDweller> anyways.. updated vagrantfile to not do that, and instead use juju config to add the insecure registry bit for the registry launched onto the docker daemon as part of the provisioning
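For reference, the daemon.json form of the option BarDweller mentions looks something like the sketch below; the registry address is the same placeholder used above, and dockerd has to be restarted afterwards. This is a hedged illustration of the manual route, not the charm's mechanism (the charm applies the config itself when docker-opts is set):

    import json

    # standard dockerd config path and key; the address is a placeholder
    conf = {"insecure-registries": ["192.168.1.xx:2375"]}
    with open('/etc/docker/daemon.json', 'w') as f:
        json.dump(conf, f, indent=2)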
<skay> I'm seeing "Too many arguments." during config-changed, and I can't figure out where it's coming from
<skay> I've grepped all of the juju code base
<EdS> is that bash's too many arguments?
<EdS> is something being expanded to a long list, eg ls * in a folder with many thousands of files will trigger that, IIRC
<skay> hmm, I'll have to dig around to see if anything like that is happening
<EdS> that was off the top of my head, sorry if I'm way off the mark!
<skay> :)
#juju 2017-10-14
<Fallenour> O/
<Fallenour> Upgrading my horizon broke my ability to log in. Any thoughts?
<Fallenour> o/
#juju 2017-10-15
<LikeLuc> Hello i tried to install juju on maas and got this error: Creating Juju controller "mainmaas-controller" on mainmaas Looking for packaged Juju agent version 2.2.4 for amd64 Launching controller instance(s) on mainmaas... ERROR failed to bootstrap model: cannot start bootstrap instance: cannot run instances: cannot run instance: No available machine matches constraints: [('mem', ['3584']), ('zone', ['default']), ('agent_name', ['9481c6a9
<LikeLuc> i used this tutorial https://jujucharms.com/docs/2.1/clouds-maas
<pmatulis> LikeLuc, do your maas nodes have at least 3584 MB of memory?
<LikeLuc> yes ?
<LikeLuc> why?
<LikeLuc> pmatulis, actually no, because i just want to test it they only have about 1024 or 2048 MB
<LikeLuc> @pmatulis ty
<LikeLuc> i gave it 4 gb and it works
<Gurkengewuerz_> Hey everyone. I'm currently testing MaaS with juju in VirtualBox. MaaS is working well; it also commissions nodes. 2 Nodes are marked as ready. The Controller and the Nodes have 4GB RAM each. When i do "juju bootstrap mainmaas" the correct node is selected and gets deployed. It is marked as Deployed with "Ubuntu 16.04 LTS" in the MaaS GUI. After the Node is started the following status is shown:
<Gurkengewuerz_> Waiting for address Attempting to connect to 192.168.1.125:22  But i can connect to the Node with my ssh-key using "ssh ubuntu@192.168.1.125". can anyone out there help me?
<LikeLuc> Hey guys i have the same problem as Gurkengewuerz_ do u have any idea
<LikeLuc> ?
<mario_> problem with juju deploy
<mario_> it does not finish deploying
<mario_> the charm's machines stay in state pending
<mario_> pending, allocating
#juju 2019-10-07
<thumper> babbageclunk: https://github.com/juju/juju/pull/10686 for the httpserver test race
<babbageclunk> looking
<thumper> babbageclunk: noticed that I hadn't pushed last commit
<babbageclunk> ok
<thumper> babbageclunk: good now
<babbageclunk> thumper: why'd you change the quick request loop from a timeout based to just trying 5 times? Won't that lead to more failures?
<thumper> no, because it normally catches it immediately, the first time through
<thumper> and if not the first. definitely the second
<thumper> the loop is just there for paranoia
<babbageclunk> I don't see how making it max of 5 rather than up to LongAttempt is more paranoid though
<thumper> I could rewrite it to end up at LongWait, but the attempt code is deprecated, and shouldn't be used
<thumper> it introduces double sleeps through the loop for no benefit
<babbageclunk> ah, ok
 * thumper reworks
<thumper> running tests
<babbageclunk> thumper: so I should be waiting? I might just approve with a comment on that line
<thumper> babbageclunk: pushing change now
<thumper> was looping the tests to double check
<thumper> babbageclunk: it's all there now
<babbageclunk> thumper: cool approved
<thumper> ta
<thumper> another worker updated : https://github.com/juju/juju/pull/10687
<Fallenour> Good morning everyone!
<Fallenour> Is anyone else having issues with containers deploying with juju? im still having this issue, and I either need to get this resolved, or I need to pull juju out of production entirely and explore alternative solutions.
<stickupkid> Fallenour, any more information about that?
<Fallenour> yea I keep getting DNS error messages on my lxd containers not deploying. MaaS/Juju deployment, servers deploy images just fine, image deploys just fine, and the OS can ping google.com, so its not a DNS issue. From what Ive seen its an ongoing issue from as far back as 2016, and its impacting all of juju it seems. I cant pitch a solution that doesnt work in my datacenter to someone elses.
<Fallenour> @stickupkid, http://paste.ubuntu.com/p/DBbGM7Ksdx/
<Fallenour>  for the juju status. Just rechecked by sshing into machine 0, pinging google, it works
<stickupkid> manadart, do you have any thoughts about this?
<stickupkid> "no matching image found", seems familar, but could be wrong
<Fallenour> Im testing something right now as a LDE @stickupkid
<Fallenour> Is there a way to add just a container, an empty container via juju?
<manadart> stickupkid Fallenour: 2.4.7 is quite old.
<Fallenour> I think I know what the issue might be. Its a BIG MAYBE
<Fallenour> 2.4.7 for what @manadart
<manadart> Fallenour: "juju add-machine lxd:x"
<stickupkid> Fallenour, juju
<Fallenour> Im on 2.6.9 @manadart stickupkid
<Fallenour> bionic-amd64
<Fallenour> My controllers are on 2.4.7 though I just saw. Should I update those, and if so, how?
<stickupkid> Fallenour, not according to the status output
<Fallenour> http://paste.ubuntu.com/p/kpKXyQYfYp/
<stickupkid> Fallenour, upgrading can be found here https://jaas.ai/docs/upgrading-models
<Fallenour> Just an FYI, thats how long Ive been fighting to get this to work, and even since before 2.4.7
<Fallenour> So do keep in mind, this is not from a lack of effort o.o
<Fallenour> I really have been fighting for years to get this thing to work for me. I r eally do want it in production
<stickupkid> Fallenour, i think upgrading your controllers would help, some bug fixes around this have landed since 2.4.7
<Fallenour> alright, I just ran upgrade model
<Fallenour> on the controllers
<Fallenour> its installing 2.5.8
<Fallenour> im also concurrently installing the latest images for 18.04 on bare machines, as well as the upgrade on the controllers
<Fallenour> Im gonna be so sad if it was a simple upgrade on the controllers @___@
<Fallenour> 2-3 years of fighting, all because of an update x...x
<Fallenour> @stickupkid, @manadart I upgraded the controllers, as well as ran upgrade-model on the model itself, but its still failing to install an image. I did notice that when I run juju status its still showing that the model is on 2.4.7. Is there anything else i need to run?
<stickupkid> Fallenour, you need to run it on the controller, `juju upgrade-model -m controller`
<Fallenour> I ran it in the controller model
<Fallenour> @stickupkid,
<Fallenour> It shows 2.5.8 in the controller model
<Fallenour> but not in cloud-000-000001, the current model im testing with
<Fallenour> @stickupkid, @manadart I went ahead and destroyed the model and redeployed it. Its showing 2.5.8. Ill let you knwo what it does once I get the machines deployed
<stickupkid> achilleasa, nammn_de - whilst I'm fighting with a lxd container in jenkins, I've brought the shell check stuff up to parity with the older pre-check tests https://github.com/juju/juju/pull/10689
<stickupkid> achilleasa, nammn_de if anyone finds a way to speed up `go vet` I'm all ears, there is no way it should take 180s, that's crazy
<Fallenour> @stickupkid, not sure what you are building, but ramfs is a good start for dramatic performance improvements. thats one of the things Ive been working with for a while now. ive noticed substantial performance improvements when using it, especially in combination with ceph with juju
<stickupkid> Fallenour, don't tempt me with ramfs :D
<Fallenour> @stickupkid, loool, its one of the things Im hoping to bottle and ship as part of the cloud Ive created, which uses juju as a machine mechanism. ceph is another major component. Once i get this working, I plan on shipping alpha for people to test. Ill probably need you guys' help with getting it set up right, but lawd if it works, its gonna be a monster.
<Fallenour> I was able to build an entire hybrid platform, complete with a centralized UI, and self-healing services, fully compliant out of the box, but its all in pieces until this works.
<stickupkid> Fallenour, yeah, you should bring it up here - https://discourse.jujucharms.com/ people would be very interested!
<Fallenour> @stickupkid, Oh I cant wait! Its just insane. ive written over 615,000+ lines of code for it so far. Its a beast.
<Fallenour> NOOOOOOOOOOOOooooooooooooooo
<Fallenour> the same issue still!
<Fallenour> D;
<Fallenour> @stickupkid, @manadart whats the current version for controllers?
<rick_h> Fallenour:  long time no see, how goes?
<Fallenour> @rick_h, ! Its been a while! Im doing amazingly well! We are borderline like...3-4 weeks of getting funded! Super awesome things happening
<Fallenour> How are you doing? How have you been?
<rick_h> Fallenour:  whoa! awesome on the funding news
<Fallenour> How are you doing? How have you been?
<Fallenour> ooops! Wrong window XD
<rick_h> Fallenour:  partying hard you know :) just plugging away at the Juju fun stuff
<rick_h> lol
<Fallenour> yea its super cool! Im really excited! OOH! Even bigger news! I got NASA to sponsor the project! I will be able to test the entire cloud platform Ive been working on for ages with them.
<Fallenour> rick_h, Oh I wish! Im still smashing my face into the keyboard trying to get it working.
<rick_h> Fallenour:  wow, that's crazy. You run across magicaltrout in your travels there?
<rick_h> Fallenour:  ouch on the face smashing, I'd suggest less of that.
<Fallenour> rick_h, not yet, but Im expecting Ill hear from him the moment he realizes juju is coming DoD and fedspace wide.
<rick_h> never really seems to work on the good bugs anyway
<rick_h> lol
<Fallenour> rick_h, Im having issues with lxd deploying now, of all the things, images were never a problem. Its trading one fire for another I suppose.
<rick_h> Fallenour:  :/ that's odd. What LXD issues. I haven't seen anything in the recent fires.
 * rick_h is still catching up on weekend emails and now is scared to look at the bugs section of email
<Fallenour> rick_h, yea they think it might be a controller version issue thatll solve itself once updates are done, so Im moving from 2.4.7 to whatever current is.
<stickupkid> can we tripple check by doing "lxc launch ubuntu:18.04 bionic"
<rick_h> Fallenour:  oh ouch, yea 2.4's been a bit
<Fallenour> stickupkid, yea Im just waiting for the 2.5.8 to 2.6.X to occur. its requiring I update twice.
<stickupkid> and what does "lxc image list"
<rick_h> stickupkid:  Fallenour yea my thought exactly. Did 2.4 know bionic existed to be able to use?
<stickupkid> print out
<stickupkid> rick_h, errrr
<stickupkid> maybe
<Fallenour> rick_h, yea I was really hesitant to change anything until I got everything working, and then I was gonna merge it all together, and then upgrade. Seems like upgrade is happening now though :P
<rick_h> Fallenour:  wheeee
<Fallenour> rick_h, surprisingly, yea, it worked
<rick_h> Fallenour:  umm, congrats? :P
<Fallenour> I was running trusty, xenial, and bionic, side by side. was mazballs tbh
<rick_h> hah, well that's ok and good. We definitely support each of those LTS's
<Fallenour> rick_h, security would have had a seizure though, so definitely not recommended for prod, but it demonstrates the capacity for juju to support a dramatic difference in OS, which was a plus
<rick_h> Fallenour:  just have to get on that UA train and have the ESM for trusty :P
 * rick_h takes off the sales hat
<Fallenour> rick_h, loool. I was keeping it up for minecraft modules. The target for that one was gaming servers. Ill be able to move it to bionic once I start pushing modules out for software support with bionic
<Fallenour> I didnt have a chance to tell you, but I finished carousel, so now I can mass produce software support once i figure out how to build charms.
<rick_h> Fallenour:  awesome
<Fallenour> it doesnt seem to be a version problem, Ive updated everything to 2.6.9, and its still occurring @rick_h @stickupkid @manadart
<rick_h> Fallenour:  ok, what's the issue/log output?
<Fallenour> machine-0: 07:46:30 DEBUG juju.worker.logger reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING;unit=DEBUG"
<Fallenour> machine-0: 07:46:30 ERROR juju.worker.dependency "broker-tracker" manifold worker returned unexpected error: no container types determined
<Fallenour> machine-0: 07:50:03 WARNING juju.container.broker no name servers supplied by provider, using host's name servers.
<Fallenour> machine-0: 07:50:03 WARNING juju.container.broker no search domains supplied by provider, using host's search domains.
<Fallenour> machine-0: 07:50:03 WARNING juju.container.broker incomplete DNS config found, discovering host's DNS config
<Fallenour> machine-0: 07:50:45 WARNING juju.worker.provisioner failed to start machine 0/lxd/0 (acquiring LXD image: no matching image found), retrying in 10s (10 more attempts)
<Fallenour> machine-0: 07:51:02 WARNING juju.container.broker no name servers supplied by provider, using host's name servers.
<Fallenour> machine-0: 07:51:02 WARNING juju.container.broker no search domains supplied by provider, using host's search domains.
<Fallenour> machine-0: 07:51:02 WARNING juju.container.broker incomplete DNS config found, discovering host's DNS config
<Fallenour> Sorry for the wall of text everyone
<rick_h> hah ok sec...processing
<rick_h> pastebin ftw :)
<Fallenour> yea I wasnt sure if itd let me pastebinit since its an ongoing write
<rick_h> Fallenour:  this is on MAAS?
<Fallenour> yea
<rick_h> Fallenour:  what version of MAAS and are the DNS servers setup in the MAAS config?
<manadart> Fallenour: Can you leave the root log level at DEBUG for a re-run, so we can see what the container manager is outputting?
<Fallenour> rick_h, its 2.4.2, and DNS is configured and working. I can ssh into a machine deployed and ping google.
<Fallenour> manadart, how would i do that/
<rick_h> Fallenour:  hmmm, this is from the machine you want the container on?
<rick_h> Fallenour:  e.g. machine 0 in this case it looks like
<Fallenour> rick_h, yea
<manadart> Fallenour: According to the first line of the output^ you have model config to use WARNING by default except for units.
<Fallenour> rick_h, when I ssh into the machines, I can ping external DNS sources, but for some reason its giving me the image cannot be found error. I even went out of my way to download all the images just to make them available, but that didnt work. I also added additional DNS servers, to include a root dns server, but that didnt resolve the issue either.
<manadart> Fallenour: try: juju model-config logging-config="<root>=DEBUG"
<Fallenour> manadart, ok ill try that
<rick_h> Fallenour:  was the machine manually added? https://bugs.launchpad.net/juju/+bug/1821714
<mup> Bug #1821714: Container on sshprovided machine: incorrect DNS <canonical-bootstack> <juju:Triaged> <https://launchpad.net/bugs/1821714>
<Fallenour> rick_h, I have this issue whether I use juju deploy, conjure up, or manual
<Fallenour> one thing I have been looking into, is juju using snap as the default lxd or /bin/ ?
<rick_h> Fallenour:  hmm, yea I see https://bugs.launchpad.net/juju/+bug/1826203 is the same as well :/
<mup> Bug #1826203: deploy openstack base bundle failed with lxd error: incomplete DNS config found, discovering host's DNS config <debug-log> <juju-release-support> <maas> <openstack-provider> <juju:New> <https://launchpad.net/bugs/1826203>
<rick_h> Fallenour: the deb bin vs the snap though it detects if the snap is on the system
<Fallenour> my thoughts are if the lxd container system in use isnt matching, it would fail
<Fallenour> rick_h, ok, so that wouldnt matter to juju?
<rick_h> Fallenour: I don't think so
<Fallenour> rick_h, I didnt either, but Im willing to try it.
 * rick_h has to take the dog to the vet, will look when I get back.
<rick_h> Fallenour:  it might be worth hopping onto https://bugs.launchpad.net/juju/+bug/1826203 as that seems the same vein and means it's not just you :)
<mup> Bug #1826203: deploy openstack base bundle failed with lxd error: incomplete DNS config found, discovering host's DNS config <debug-log> <juju-release-support> <maas> <openstack-provider> <juju:New> <https://launchpad.net/bugs/1826203>
<rick_h> Fallenour:  do you have any http proxies or anything in play in the controller/model?
<Fallenour> rick_h, not that Im aware
<Fallenour> Im currently playing with wiping out the dns config entirely, and seeing what happens
<Fallenour> building a new machine, which will deploy the new config. Will see what happens @rick_h
<rick_h> Fallenour:  ok, still looking
<Fallenour> if everything works, and I get this working, like fully working, i expect free tacos & energy drinks for life from Canonical
<Fallenour> and some kinda plushy animal. Suse gave me an iguana o.o
<Fallenour> rick_h, well...at least I know that its not maas now
<rick_h> Fallenour:  ?
<Fallenour> rick_h, unless...does juju store the dns configurations for the model, or per machine when using maas?
<rick_h> Fallenour:  so the thing is that I don't think Juju deals with DNS and just relies on the machines at play.
<rick_h> Fallenour:  e.g. dhcp/etc
<Fallenour> rick_h, wiped the dns config entirely from maas, so its not a bad config from maas. maas is fully up to date.
<rick_h> Fallenour:  I'm guessing there's something up with the dhcp setup and dns but not sure
<Fallenour> rick_h, thats the thing though, the machines can ping externally
<Fallenour> rick_h, if its dns, it doesnt make much sense, because it can ping dns names, and resolve them.
<Fallenour> rick_h, whats the address of the system it gets images from?
<rick_h> Fallenour:  right, but according to https://github.com/juju/juju/blob/085584f255f6d66530da25947544ca33418d0675/container/broker/broker.go#L76 we look at the interfaces on the host
<Fallenour> rick_h, which is what doesnt make sense. The host can reach the internet and resolve addresses
<Fallenour> rick_h, whats the dns name of the source it draws the images from?
<rick_h> Fallenour: I *think* https://us.images.linuxcontainers.org/
<Fallenour> rick_h, OOOO
<Fallenour> OOOOH BOIII!
<Fallenour> I found something here!
<rick_h> Fallenour:  I'm not going to get my hopes up yet...
<Fallenour> rick_h, it cant ping the address for the images
<rick_h> Fallenour:  :(
<Fallenour> rick_h, it can ping ubuntu though. hmmm
<Fallenour> rick_h, and it can pull down packages
<rick_h> Fallenour:  https vs non https?
<Fallenour> rick_h, stickupkid manadart is there a command I can use to test to see if I can download an image?
<rick_h> Fallenour:  lxc launch
<rick_h> Fallenour:  from that host machine
<Fallenour> rick_h, I tried building a container from the local machine, but its failing
<rick_h> Fallenour:  there's your problem?
<Fallenour> rick_h, yea Im thinking its something to do with reaching the images
<rick_h> if you can't do it Juju certainly can't
<Fallenour> rick_h, mmmmm
<Fallenour> rick_h, have you guys seen this issue before? this doesnt make any sense logically
<rick_h> Fallenour:  no, my brain is hurting trying to think it through
<stickupkid> lxc launch ubuntu:18.04 bionic
<Fallenour> if I can ping the parent domain, it doesnt make sense that I could....FIREWALL PFSENSE!!!
<rick_h> Fallenour:  I'm getting a setup going on my maas to see if I can run logs and see where it falls over here
<stickupkid> that's the exact command
<rick_h> Fallenour:  is there's pfsense on the maas nodes?
<Fallenour> rick_h, no, but there is a firewall inline
<Fallenour> rick_h, im testing that now by disabling all block ru...no that isnt it either.
<Fallenour> rick_h, I disabled all the block rules
<manadart> Fallenour: Does resolv.conf on the MAAS hosts have the MAAS server as the nameserver?
<Fallenour> manadart, yea it says search maas
<Fallenour> rick_h, manadart which, if Im not mistaken, means that if the gateway is the firewall, it would forward it to the firewall if it doesnt know it, right?
<Fallenour> rick_h, manadart would...would ubuntu be blocking the request because its behind a firewall? because its port is 443. ssl issue?
<Fallenour> rick_h, I have an idea
<Fallenour> btw I just want to say in advance I really do appreciate you guys and what you do for the community
<Fallenour> rick_h, its gotta be something inline
<Fallenour> rick_h, I just pulled down an image after testing to see if I can pull it down inline, and it failed, then unplugged, went wireless, and it worked
<rick_h> Fallenour:  yea, it's hard because firewalls like to drop things without ack/etc so you don't know who to blame
<Fallenour> rick_h, manadart ok, so we know its inline, but whats the best way to troubleshoot which device it is? theres only two things in line that could do this, the maas system, and the firewall.
<Fallenour> rick_h, the thing with the firewall though is I disabled the block rules
<rick_h> Fallenour: check the logs on those boxes looking for someone running a firewall and dropping the requests
<Fallenour> rick_h, i run both boxes. Whats the best way(s) to do that?
<Fallenour> rick_h, should I start with the firewall, pfsense, or the maas box?
<Fallenour> both are latest and up to date.
<Fallenour> rick_h, man this is something...else? Ive never seen this kind of problem before. im in the logs now, and its not even showing the IP address for anything ubuntu related.
<rick_h> Fallenour:  but the url is linuxcontainers.org vs ubuntu
<Fallenour> rick_h, but its trying to resolve to cloud-images.ubuntu.com:443
<Fallenour> rick_h, does it hit that address, and then pull from somewhere else?
<rick_h> Fallenour:  oh maybe that's good then.
<Fallenour> rick_h, ok, I need to make this really noisy so I see it for sure
<rick_h> Fallenour:  maybe see if you can create a container on the maas host?
<Fallenour> rick_h, ok so now this really doesnt make sense. Im on the same network, behind the same infra, and one system allows me to pull, and the other doesnt?
<Fallenour> rick_h, but none of the infra is specifically isolated. Where does the metadata come from for the images for lxd?
<Fallenour> rick_h, manadart ok so this is just weird. Maas wont let me test it. it keeps saying lxdbr0 exists, but it doesnt? so I cant use sudo lxd init --auto to test
<rick_h> Fallenour:  ? on the maas node?
<rick_h> sorry, maas host I should say
<fallenour1> @rick_h, stickupkid manadart ok lets try this again
<fallenour1> OOOOOOOOOOOOOOOooooooooooooooooOOOOOOOOOOOOO!!!!!!!!!!!
<fallenour1> GOOOOOOOALLL!!!
<rick_h> fallenour1:  ?! found it?
<fallenour1> rick_h, DAMN YOU SNORT! I DID!
<manadart> fallenour1: \o/ So what is it.
 * rick_h drumrolls
<fallenour1> rick_h, It was the snort c2 rule set that was blocking a lot of things unreasonably
<fallenour1> rick_h, So what I did collectively: updated juju from 2.4.7 > 2.6.X, Updated maas packages, wiped DNS setting in maas, removed squid, squid proxy, snort, and pfblockerng rules, and it works!
<rick_h> fallenour1:  lol
<rick_h> fallenour1:  "small tweaks"
<fallenour1> rick_h, LOL
<fallenour1> rick_h, alright! lets DO THIS!
<fallenour1> ooo you GOTTA BE KIDDING ME
<fallenour1> power went out Q____Q
<pmatulis> rick_h, here it is: https://bugs.launchpad.net/juju/+bug/1847128
<mup> Bug #1847128: [2.7] ceph-osd stuck in "agent initializing" <juju:New> <https://launchpad.net/bugs/1847128>
<rick_h> pmatulis:  ty
<rick_h> hml:  did that help any with the model-config test?
<hml> rick_h: iâm in debugger land, trying to understand whatâs going wrong.  helped to point in a better direction.
<rick_h> hml:  ok, let me know if you could use extra eyeballs
<hml> rick_h: rgr
<bdx> heyya
<bdx> when I add `series: kubernetes` to my bundle, it does not want to deploy
<bdx> this bundle https://api.jujucharms.com/charmstore/v5/~omnivector/bundle/slurm-core-k8s-2/archive/bundle.yaml
<bdx> gives me https://paste.ubuntu.com/p/Bh6Hwy54zq/
<bdx> but the bundle shows the k8s tag in the charmstore, all looks good there
<bdx> to get the k8s bundle to deploy I need to change `series: kubernetes` to `bundle: kubernetes` like so https://paste.ubuntu.com/p/WQ7xmF4Gw7/
<bdx> after I make this change the bundle can deploy, but doesnt show the k8s tag in the charmstore
<bdx> are people already aware of this?
<thumper> series: kubernetes is just wrong
<thumper> because kubernetes isn't a series
<thumper> the bundle: keyword was added to handle this
<thumper> if the charmstore is parsing 'series: kubernetes' it is doing it wrong
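A minimal sketch of the distinction thumper is making: a Kubernetes bundle declares bundle: kubernetes at the top level, since kubernetes is not a series. The application entry below is a placeholder, not the real slurm bundle:

    import yaml

    bundle = yaml.safe_load("""
    bundle: kubernetes
    applications:
      some-k8s-app:
        charm: cs:~example/some-k8s-charm
        scale: 1
    """)
    assert bundle['bundle'] == 'kubernetes'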
#juju 2019-10-08
<bdx> yeah, check out the osm bundle it uses `bundle: kubernetes` and doesn't get a kubernetes tag in the charmstore https://jaas.ai/osm
<bdx> and here, my bundle uses it incorrectly (non-deployable bundle) and gets the kubernetes tag https://jaas.ai/u/omnivector/slurm-core-k8s/bundle/3
<bdx> https://github.com/juju/charmstore/issues/887
<babbageclunk> wallyworld / hpidcock / kelvinliu could I get a review of this? https://github.com/juju/juju/pull/10692 (model-defaults test fix)
<babbageclunk> also this one is some assess tweaks https://github.com/juju/juju/pull/10693
<wallyworld> ok
<wallyworld> babbageclunk: lgtm, thanks for the fix
<babbageclunk> thanks!
<babbageclunk> damn - meant to put them against 2.6
<babbageclunk> hang on, cancelling the merge on the test fix so I can retarget
<wallyworld> babbageclunk: we ain't gonna do any more 2.6 (touch wood) so don't feel like it's a requirement to land there first
<babbageclunk> I just figured may as well fix it in both vaguely current branches
<thumper> thread logger through apicaller worker: https://github.com/juju/juju/pull/10694
<thumper> hpidcock: did you want to talk about bug 1847084?
<mup> Bug #1847084: Juju k8s controller is not getting configuration parameters correctly <juju:Incomplete> <https://launchpad.net/bugs/1847084>
<thumper> there is history there
<thumper> hpidcock: nm, replied to the bug
<thumper> hpidcock: the issue is --config vs. --model-defaults, they were setting proxies in the config, adding a model then wondering why the proxies weren't there
<hpidcock> I already talked with wallyworld about it. I just wanted to gather a bit more information about what we should test it on with a fix. Since there are like 20 different CNI plugins for kubernetes that could affect how this works
<thumper> this isn't a network issue
<thumper> this is a model config issue
<thumper> hmm...
 * thumper is rereading the bug...
<hpidcock> the model config is fine, it's not passing the proxy env vars to the controller pod
<hpidcock> but I wanted to understand the environment they are expecting this to work on so I can test it. Because the proxy configuration especially the no-proxy ranges could affect the in cluster/container networking
<thumper> ok
<thumper> I think perhaps there is an issue with them setting no-proxy not juju-no-proxy
<thumper> hpidcock: I guess I'm not sure where they expect the proxy to be set
<thumper> a bit weird that the apt worked but curl didn't...
<hpidcock> I might be just assuming too much here, but I think they want the pod to have the proxy env vars set. So each unit in the model will have proxy env vars injected via the pod spec.
<hpidcock> yeah not sure about that
<thumper> hpidcock: I think you may well be right
<thumper> but I'd also check the no-proxy...
<thumper> and also their expectation that --config is passed on to models is wrong
<thumper> so there is a bunch of not-right thinking there
<hpidcock> I don't think we have a way to automatically populate no-proxy, since we might not know the pod networking cidrs
<hpidcock> but it seems no-proxy ranges should be for all container networking, so anything in-cluster shouldn't go via the proxy
<thumper> I agree
<thumper> not many things handle a cidr no-proxy
<hpidcock> proxy configuration at application level is also a bit weird inside of k8s, not very idiomatic way to setup stuff. Normally you would use something like Istio to handle any proxy configuration, but I can understand setting the apt-proxy
<hpidcock> wallyworld: I wasn't able to repro the 1847128 volume bug using 2.7 develop head
<wallyworld> hpidcock: hmmm, ok, i'd update the bug with what you did and possibly ask for exact repro steps and mark as incomplete in the interim
<thumper> babbageclunk: easy review? https://github.com/juju/juju/pull/10694
<babbageclunk> sure
<kelvinliu> wallyworld: got a min to discuss add-k8s cmd?
<wallyworld> kelvinliu: sure, give me 2 minutes
<kelvinliu> yup
<babbageclunk> thumper: approved
<thumper> babbageclunk: ta
<wallyworld> kelvinliu: free now
<kelvinliu> wallyworld: standup?
<wallyworld> yup
<babbageclunk> bah, is anyone familiar with decrypting ssl/tls in wireshark?
<hpidcock> babbageclunk: it's a bit fiddly
<hpidcock> you need to add the private key I believe in the settings
<babbageclunk> hpidcock: yeah, I'm beginning to realise - the bit about needing to capture the handshake is what's tripping me up now
<babbageclunk> I think I've got all the private keys added
<hpidcock> I think they just mean it needs to be a new connection
<hpidcock> like you can't decrypt an already established connection
<babbageclunk> yeah, so I need to kill the controllers, start tcpdump, start the controllers again
<hpidcock> babbageclunk: https://sharkfesteurope.wireshark.org/assets/presentations17eu/15.pdf
<hpidcock> might not be possible if its an ECDHE session
<hpidcock> see slide 15 "Ephemeral (Elliptic Curve) Diffie-Hellman (ECDHE)"
<hpidcock> babbageclunk: https://golang.org/src/crypto/tls/cipher_suites.go#L77 looks like you're SOL without doing some MITM proxy or something else, you could probably force it to use TLS_RSA_WITH_AES_128_GCM_SHA256 if you're in the mood to recompile
<babbageclunk> hpidcock: oh right, because the controllers will use the top ones for their connections so it'll be ECDHE
<babbageclunk> hpidcock: it's probably not that important - was hoping to distinguish between different traffic types going to 17070, thanks for the pointers though
<wallyworld> kelvinliu: free again?
<sou> Hey, I want to do some maintenance on a host machine which runs OpenStack juju units. Is there any way to migrate the existing units created by juju from one host machine to another?
<magicaltrout> sou: depends what you're wanting to migrate i guess, you could `juju add-unit blah` to create a new one of the existing application
<magicaltrout> then juju remove-unit to shutdown the old one when they are sync'd up
<kelvinliu> wallyworld: back now
<kelvinliu> sorry, just finished lunch
<wallyworld> kelvinliu: no worries, standup?
<kelvinliu> yes
<sou> This is wrt the easyrsa unit. There is only one unit in the setup.
<sou> Easyrsa serves as the CA for certs generated by etcd
<sou> I had to take down the host which runs easyrsa unit for maintenance (I had to reinstall the host machine). Though when it came up, a new easyrsa unit was added. But the etcd cluster was broken
<sou> So I was figuring out a plan to securely reinstall the host machine which runs the easyRSA unit
<sou> One of the points which came to my mind is to back up the volume used by easyrsa, and then restore the backup when the unit is recreated
<kelvinliu> wallyworld: we can't do precheck on podspec in deploy facade, because at that time there is no podspec yet..
<wallyworld> derp
<wallyworld> just have to check and error out later then
<kelvinliu> ok, so no need to do this check, coz we do all these in ensureXXResources already.
<shann> Hi
<shann> i have a question about juju: we can add units to an application, but if one of the units stops responding, will juju redeploy it?
<manadart> shann: You can "juju remove-unit app/x" and "juju add-unit app".
<shann> thanks manadart, i also see docs about ha-controller and ha-applications
<nammn_de> manadart stickupkid a small pr with some tests https://github.com/juju/juju/pull/10696 if someone want to take a look
<manadart> nammn_de: Looking.
<Fallenour> OH MY GAWD WHAT A BLESSED DAY!
<Fallenour> Openstack is written in Django!
<Fallenour> 8D
<Fallenour> <3
<Fallenour> you guys are amazing o.o
<Fallenour> Hey manadart stickupkid rick_h do you guys know which region is chosen by default for openstack?
<stickupkid> Fallenour, i don't unfortunately
<stickupkid> nammn_de, i've approved, but you won't be able to land until my CMR branch does
<Fallenour> I figured it out @stickupkid @manadart @rick_h its admin_domain. Also, the command for recovering the password is: juju run --unit keystone/0 leader-get admin_passwd
<Fallenour> the username is admin.
<Fallenour> My gift to the juju community <3
<Fallenour> That officially makes me a juju contributor for the openstack module XD
<Fallenour> seriously though, that should be added to the official docs on openstack-base-61
<Fallenour> next question: rick_h manadart stickupkid I have 5 OSDs per compute/storage node set, juju status shows 5 active, openstack only sees 1 drive each. thoughts?
<nammn_de> thanks manadart stickupkid i'll wait for your branch then and merge after
<rick_h> Fallenour:  not sure, this falls under openstack expertise I don't have tbh.
<rick_h> Fallenour:  have to bug icey and company on that one
<Fallenour> rick_h, WUT! Something you dont know!? The apocalypse o.o
<rick_h> Fallenour:  I know, I hang my head in shame
<Fallenour> rick_h, tis truly a sadness day :( its ok though! Openstack is up and running, and that is good enough. with ssl enabled, I should be able to access sessions over the web, and I do vaguely remember solving this issue in the past, so im sure I can do it again.
<Fallenour> rick_h, In other news, my boss didnt like the work I did to create a centralized UI for controlling all of our systems and services at work, which means the company has officially rejected it. This also means that it remains solely as mine now.
<Fallenour> rick_h, that being said, it means I can contribute up to 615,000+ lines of code to whatever I see fit.
<Fallenour> rick_h, now that I know that openstack is built in django, that means I can contribute all of the code to the openstack juju charm suite, if the team will have it.
<icey> Fallenour: OpenStack isn't built in Django, the openstack dashboard is though
<icey> Fallenour: we're generally over in #openstack-charms if you want to hang out with the cool kids ;-)
<Fallenour> icey, yeap! and the rest is built in python ;), which the app I built is completely and utterly designed to natively integrate with
<Fallenour> icey, horizon is the only app that matters O.o, all the rest are just...core services o.o
<Fallenour> XD
<Fallenour> icey, it wont let me join :(
<Fallenour> icey, I got in
<icey> Fallenour: let me guess, you had to register your nick :-P
<Fallenour> lol yea icey I thought I had already signed in, but im guessing registration sign ins time out
<stickupkid> nammn_de, right, I've fixed the issue around landing PRs, you should be able to merge yours now
<nammn_de> stickupkid: cool 🦸
<jam> manadart: can you rubberstamp the rebase onto develop https://github.com/juju/juju/pull/10698
<manadart> https://media2.giphy.com/media/xT4Aphm45GMfpVEUxO/giphy.gif?cid=790b7611dfff702a60cf90c310d9d75147cd11c9ad8af327&rid=giphy.gif
<nammn_de> rick_h: https://github.com/juju/juju/pull/10685 should force all bootstrapping to be lower case
<rick_h> nammn_de:  getting back and looking
<rick_h> nammn_de:  I think if it's a user name then yes, however we can't force it for things that exist in the world like clouds and regions on those clouds that are outside our control
<nammn_de> you mean regions should stick to being camelcase in case they are? Aren't we then back to the initial kube problem?
<pmatulis> rick_h, fyi hpidcock cannot reproduce my ceph-osd issue but i can, consistently (can make it break and can make it work)
<rick_h> nammn_de: sorry, so for the controller name we create it's fine to lowercase it
<rick_h> nammn_de:  but we have to be careful we don't pass that as a new "region name" value to the API for where to request a VM from
<rick_h> nammn_de:  because the underlying cloud might be case-sensitive
<rick_h> nammn_de:  the bug was that juju creates a name for the controller, and that can be lowered just fine
<rick_h> nammn_de:  let me know if you want to HO high bandwidth to make sure we're on the same page
<nammn_de> rick_h: lets ho, better safe than sorry :D
<nammn_de> rick_h: gimme a ping if you can join HO
<rick_h> nammn_de:  k, omw
<rick_h> hopping into daily
<magicaltrout> gaa. when your mac pops up a notification where you see someone saying something about a charm not working, then can find no trace of the message anywhere on your laptop or online.... 😡
<rick_h> magicaltrout:  hah yea, "was it IRC? no... Telegram? no... Email? no...wtf!!!"
<magicaltrout> i know, i've literally looked everywhere, some chinese student and that's all I know
<magicaltrout> grr
<magicaltrout> oh well whoever it was wasn't lying... i have broken it =/
<rick_h> :(
<rick_h> "the more you know" I guess
<magicaltrout> well if every bug report was some transient osx notification my life would probably be a lot less stressful as i'd ignore them unless there was a useful description in the first 10 words ;)
<magicaltrout> so this chap got lucky ;)
<magicaltrout> also, if you follow that plan, if you're not looking at your screen when the notification lands, it never happened ;)
<aisrael> cory_fu: We're looking at implementing add/list/update clouds (https://github.com/juju/python-libjuju/blob/master/juju/juju.py). Do you foresee any issues? The goal is to be able to replicate the functionality of add-k8s, et al.
<cory_fu> aisrael: Well, the main caveat is that there is now a distinction between cloud info stored locally in the clouds.yaml file vs the cloud data actually registered with the controller.  I don't actually think that class / file has any relevance any more and should probably be removed.
<cory_fu> aisrael: Instead, you probably care about the clouds in the controller, which should be accessed via the Controller class.
<aisrael> cory_fu: okay, noted. Just making sure this can work via api remote from the bootstrapped machine
<cory_fu> aisrael: Yeah, it can.  Just use the CloudFacade methods, e.g. AddCloud: https://github.com/juju/python-libjuju/blob/master/juju/client/_client5.py#L1342
<cory_fu> aisrael: If you're not already familiar, https://pythonlibjuju.readthedocs.io/en/latest/upstream-updates/index.html#integrating-into-the-object-layer has tips on how to use the Juju CLI to see what calls need to be implemented and what kind of data they need to be passed
<aisrael> cory_fu: Perfect, thanks. And nice, I didn't know about that.
<cory_fu> aisrael: Also, libjuju is now officially maintained by the Juju core team, so they should be familiar enough with the Python code-base to help.  You can check the PR or commit history to see who has actively worked on it.
<rick_h> cory_fu:  aisrael right, stickupkid and I can help review/QA and such. I know stickupkid was consulted earlier today around pre-imp ideas and the like
<aisrael> rick_h: much appreciated! It sounds like David's already making progress, thanks to stickupkid
<bdx> hello
<rick_h> hey bdx
<rick_h> how goes things out west?
<bdx> battles
<bdx> lol
<rick_h> wheeee
 * rick_h collects his chain mail
<bdx> I have a peer relation that is giving me "ERROR permission denied"
<bdx> rick_h: ever heard of this?
<rick_h> bdx:  hmmm, in the unit log?
<bdx> yeah https://paste.ubuntu.com/p/2rHnhYJKNz/
<rick_h> bdx:  https://bugs.launchpad.net/juju/+bug/1818230 looks like yes...
<mup> Bug #1818230: k8s charm fails to access peer relation in peer relation-joined hook <juju:Triaged by wallyworld> <https://launchpad.net/bugs/1818230>
<bdx> thats it
<rick_h> bdx:  but honestly no hadn't run across it. Thinking...
<bdx> looks like that was targeting 2.5.9
<bdx> Im running 2.7beta1
<bdx> geh
<rick_h> yea, filed back in march and seems like it never went anywhere
<rick_h> stub:  did you find anything re: that or did that just get dropped? ^
<rick_h> bdx:  is your charm also a k8s charm?
<bdx> yeah
<rick_h> ok, it's really unclear what permission is at issue here...
<magicaltrout> chown -R root:root / && chmod -R 777 /
<rick_h> magicaltrout:  hah, "when all else fails"
<magicaltrout> i've got all the best hacks
<bdx> may as well close that bug
<bdx> magicaltrout, great work here
<magicaltrout> lol
<rick_h> bdx:  lol
<rick_h> bdx:  are you setting a pod-spec in this hook?
<rick_h> bdx:  looking at http://bit.ly/322LBrG there's a bunch of tests with that error around setting a pod spec without values
<bdx> https://github.com/omnivector-solutions/layer-slurmd-k8s/blob/7d6987a8a0b186486ad08854e6c12a60977ea3b5/src/reactive/slurmd_k8s.py#L130,L153
<bdx> oh no
<rick_h> oh?
<bdx> this is not good
<rick_h> does this code line up to what's running? the line before the permissions denied is the @when('slurmd.initialized')?
 * rick_h walks back and away slowly...
<magicaltrout> throw it all away and crack out terraform!
<bdx> lol
<rick_h> I hear that has no bugs and never does anything bad ;)
<rick_h> bdx:  so what's bad?
<bdx> rick_h: from the charm code, nothing is gating the peer relation handler from running  except being the leader and the relation.join
<rick_h> bdx:  yea, I get that
<rick_h> but looking at the log and what code is being impacted the line before is odd that it's just the @when('slurmd.initialized')
<bdx> so like ... I'm guessing the only way it could run before the @when('slurmd.initialized'), is if multiple units are deployed simultaneously
<bdx> maybe
<bdx> but yeah, from the log, you are right, that it comes last
<rick_h> bdx:  can we put some debug output to see which line is causing the permission denied?
<bdx> yeah
<rick_h> I assume it must be the iterating over peer._data but that seems odd so I'm questioning assumptions that this is what's running
<bdx> root=<trace>
<bdx> ?
<bdx> unit=INFO
<rick_h> I more meant updating that get_slurmd_peers.... with some print("got 1")
<rick_h> and 2 and 3 and see if we can tell right where it goes boom
<bdx> gotcha, perfect, on it
<rick_h> ty
 * rick_h refills coffee while you do that
<thumper> morning team
<rick_h> morning thumper
<thumper> https://github.com/juju/juju/pull/10702 for more logger threading
<bdx> rick_h: I set a log("DEBUG STAGE #") statement on every other line in that handler
<bdx> the traceback is getting thrown before any code in the handler executes
<bdx> not seeing any of the debug statements
<magicaltrout> cmars: whats the deal with your kafka charm?
<rick_h> bdx:  ok, so that's good/bad
<magicaltrout> I'm updating all the big data charms, but don't really wanna update the kafka one with the bigtop version as it moves slowly compared to the upstream stuff, which is fine for hadoop etc but not as much for kafka
<magicaltrout> i'm considering forking zookeeper into a non-bigtop version also just cause you don't need all the crud for zk
<bdx> rick_h: do you think increasing the verbosity of the unit log would be helpful here?
<bdx> well I bumped it up, doesn't
<cmars> magicaltrout: hey! we're using it in production for an internal project. it might need some work to support other use cases. currently requires tls certs for clients, for example
<bdx> show anything useful around this error
<cmars> magicaltrout: we've also forked zookeeper
<cmars> magicaltrout: https://github.com/cloud-green/zookeeper-snap-charm
<cmars> magicaltrout: the kafka charm is https://github.com/cloud-green/kafka-snap-charm
<cmars> magicaltrout: we have a few other charms under cloud-green, a jmx-exporter to prometheus, and a cert-manager to help organize client certs for the kafka clients
<magicaltrout> thats cool cmars!
<magicaltrout> if we could make tls certs optional that would be good, but the cert-manager and tls support is a cool feature that is a pain to set up
<cmars> magicaltrout: are you using kafka streams at all?
<cmars> we've found it pretty nice for our use cases so far.. but there's a learning curve
<hpidcock> thumper: https://github.com/juju/juju/pull/10702 LGTM
<thumper> hpidcock: ta
<bdx> rick_h: possibly a good test would be to take out the peer relation code and see if the issue persists when adding a second unit
<bdx> I've commented out all charm code pertaining to the peer relation, running a deploy now
<bdx> commenting out all peer relation charm code allowed the second unit to deploy successfully https://paste.ubuntu.com/p/fNPGgmGbMR/
<bdx> now, possibly I should add back in bits of the peer relation to see where it breaks
<bdx> alright, I have somewhat of a data point
<bdx> when the only peer relation code was that in metadata.yaml https://github.com/omnivector-solutions/layer-slurmd-k8s/commit/9f39b24b32d94c17a983b61ba513eb12abb1694a
<bdx> adding a second unit of the application worked, there was a warning message in the log WARNING unit.slurmd/1.juju-log slurmctld:3: No RelationFactory found in relations.slurmd.peers
<bdx> but no relation-get error, everything successfully deployed with no hook errors
<bdx> the next step I took was to add back the relation factory to peers.py
<bdx> which is where it broke again
<bdx> https://github.com/omnivector-solutions/layer-slurmd-k8s/commit/71455dcebfb793f706bd76d57a5b3a420ba74d5f
<bdx> the charm handler for the peer relation is still commented out https://github.com/omnivector-solutions/layer-slurmd-k8s/blob/debug_peer_relation/src/reactive/slurmd_k8s.py#L130,L156
<bdx> the error happens with just ^
<bdx> I'll add this to the bug, sorry for spamming
<bdx> ok, here's a slightly more coherent version of this rambling https://bugs.launchpad.net/juju/+bug/1818230/comments/3
<mup> Bug #1818230: k8s charm fails to access peer relation in peer relation-joined hook <juju:Triaged by wallyworld> <https://launchpad.net/bugs/1818230>
<wallyworld> bdx: thanks for all the input, we'll take a look
<bdx> thx
#juju 2019-10-09
<wallyworld> thumper: what did we resolve with respect to printing stuff to stderr where the output format was yaml. typically i think we currently have what's printed to screen is just yaml, but in theory if we printed stuff to stderr, the user could still redirect stdout somewhere to get the pure yaml. or do we want to stick with what i think we have now, which is anything printed to the terminal is just yaml
<thumper> I think the answer was complicated...
 * thumper is in the middle of emails just now
<wallyworld> ok, let me know if you have a minute to discuss. calling an action, we might still want to print progress logs to stderr  even if format is yaml
<thumper> I think our general answer was "if you specify a format, whether yaml or json, you get valid yaml or json on stdout"
<thumper> why print logs to stderr if they are asking for yaml output?
<thumper> why not have a stderr section in the yaml?
<wallyworld> we do
<wallyworld> but
<wallyworld> we also want to show progress as it happens
<wallyworld> so the experience is the same waiting for the action to run regardless of format
<thumper> I can get on board with the progression messages being stderr
<wallyworld> i am thinking yes also but wanted to get a +1 since it's a bit different
<wallyworld> i guess it's easy to change it we get -ve feedback
 * thumper nods
<stub> The general use case is people want free-form plain text output from actions, so worth optimising for that. YAML is very occasionally more convenient, and it's rare that it actually needs to be machine readable. And even in those cases, it's not always suitable as IIRC the existing API only allows simple strings as values.
<stub> (and the keys have limitations too, such as not being able to use filenames because they tend to contain '.' characters)
<thumper> wallyworld: https://bugs.launchpad.net/juju/+bug/1847084/comments/5
<mup> Bug #1847084: Juju k8s controller is not getting configuration parameters correctly <juju:Incomplete by hpidcock> <https://launchpad.net/bugs/1847084>
<stub> Maybe the action should choose between structured output (action-set), and unstructured (whatever I spit out)
<thumper> stub: yes, we are optimising for plain freeform output for action execution
<thumper> but we still need to think of the edge cases
<wallyworld> my thinking is that if we are providing a feedback mechanism for long running actions via action-log, you'd want to see that regardless of chosen output format
<wallyworld> we do support structured vs unstructured too
<wallyworld> stdout and stderr have their own fields in the yaml
<wallyworld> and we also now print the action stdout and stderr when running with "plain" format
<wallyworld> as well as the structured
<stub> Yes, that would be useful. My knee jerk thought would be action-log ends up on stderr with timestamps and maybe severity, and output ends up on stdout or wherever -o filename put it
<wallyworld> yup
<wallyworld> that's how it will work
<wallyworld> this is happening in 2.7 edge
<wallyworld> with "juju-v3" feature flag
<wallyworld> feature flag gets the new "juju call" syntax
<wallyworld> but run-action also benefits from the improved output
<stub> If the action was able to choose its mode (via a hook environment tool), you might be able to make the transition seamless without a flag
<wallyworld> the flag just determines the juju cli syntax
<wallyworld> all actions benefit from the newer output
<wallyworld> the yaml is the same but the plain output is improved
<wallyworld> all actions also get a numeric id rather than a uuid
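[sketch] The two output modes being discussed, assuming the 2.7-era actions CLI; unit and action names are placeholders:
    juju run-action mysql/0 backup --wait                 # plain format: action stdout/stderr printed nicely
    juju run-action mysql/0 backup --wait --format=yaml   # structured: stdout and stderr appear as their own fields in the yaml doc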
<stub> Just thinking it is pointless for the operator to need to specify --format=text for actions designed to spit out text or --format=yaml for actions designed to spit out structured data
<thumper> ugh... this is a breaking change in a point release...
<thumper> but I can't think of a way to have any compatibility...
<wallyworld> thumper: breaking change?
<wallyworld> ids are opaque
<wallyworld> nothing should depend on their format
<thumper> the id of the action is part of the interface
<thumper> but they do
<wallyworld> they are just a blob
<thumper> and they have been uuids
<thumper> and some tests check for uuids
<thumper> I know there are openstack ones
<wallyworld> sigh, that's bad behaviour then
<wallyworld> ids should be opaque
<thumper> yes
<thumper> they should
<wallyworld> so let's go with that then
<thumper> but when the docs said "id is a uuid", acceptance tests were written to ensure this
<wallyworld> easy to change some tests
<wallyworld> i already updated the juju ones
 * thumper nods
<wallyworld> and the only bad feedback so far has been due to a different bug
<wallyworld> feedback from openstack folks
<wallyworld> where we sometimes leave out stdout in the yaml
<wallyworld> that's just a straight bug unrelated to action ids or such
 * thumper nods
<wallyworld> stub: with --format, we don't really know what an action may or may not write to stdout etc (in addition to producing with action-set) so we display both. for plain output we print nicely, for yaml, it's as fields in the yaml doc
<stub> Right. The action can tell you what it is going to do though
<wallyworld> it can but who's to say an action won't do both
<wallyworld> if there's no stdout we don't print it
<stub> An action can tell you that too if you let it
<wallyworld> and if there's no structured data we don't print that
<wallyworld> but people have raised bugs if a field in the yaml is missing, regardless of whether it's empty
<wallyworld> so in the yaml we're thinking we'll include all fields
<stub> I'm not too fussed on including all fields, as personally I'd consider it legacy behavior. If I want YAML output, I'll format and spit it out myself, avoiding the limitations Juju imposes on structured output (keys with a restricted charset, values only being strings)
<wallyworld> +1
<wallyworld> thumper: i like your bug comment, let's see what happens from it. however, their main issue (one of them) is with the controller itself as it can't reach the charm store to download a charm
<thumper> well doesn't the controller have the proxies set internally?
<wallyworld> we don't do anything different for k8s vs iaas as far as i know. what do we do for iaas
<wallyworld> but for k8s, i think we would need to pass in the proxies via pod env vars
<wallyworld> for the controller
<thumper> is the controller in k8s or iaas?
<wallyworld> in their case? i assume it's all k8s? not sure
<thumper> does a k8s controller have a proxy updater worker?
<thumper> as long as it does, the internal proxies are set
 * wallyworld checks
<thumper> as long as there isn't any caas specific behaviour in the proxy updater worker
<wallyworld> thumper: no, that's currently in the iaas only worker list
<thumper> well... that seems like a bug
<wallyworld> yeah, seems so
<wallyworld> we were initially quite selective about what we ran up in a caas controller
<thumper> also... k8s controller for the client?
<wallyworld> and networking/proxies we wanted to be cautious about
<wallyworld> what do you mean "k8s controller for the client"?
<thumper> it was an exclamation but with limited info due to public channel :)
<hpidcock> wallyworld: `juju bootstrap --logging-config "<juju.worker.diskmanager>=TRACE"` is that how you enable trace logging on a specific module?
<wallyworld> --logging-config "<root>=INFO;juju.worker.diskmanager=TRACE"
<wallyworld> only <root> needs <>
<hpidcock> awesome thankyou
<wallyworld> ; separator
<wallyworld> can add as many packages as you want
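[sketch] Putting that together; module names here are only examples:
    juju bootstrap lxd test --logging-config "<root>=INFO;juju.worker.diskmanager=TRACE"
    # or adjust a running model, ;-separated, as many packages as you want:
    juju model-config logging-config="<root>=INFO;juju.worker.uniter=DEBUG;juju.worker.diskmanager=TRACE"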
<hpidcock> timClicks: RE: https://bugs.launchpad.net/juju/+bug/1847128 how long did you wait for it to finish initialising?
<mup> Bug #1847128: [2.7] ceph-osd stuck in "agent initializing" <juju:New for hpidcock> <https://launchpad.net/bugs/1847128>
<timClicks> hpidcock: I deployed then ran juju-wait for ~30 mins
<hpidcock> oh wow
<timClicks> sent SIGINT then checked status
<hpidcock> yeah mine resolved within 5min
<hpidcock> really weird
<timClicks> is it possible that the snap build is somehow different than your env?
<hpidcock> yeah that's what I'm thinking
<timClicks> new tutorial up https://discourse.jujucharms.com/t/2189
<timClicks> worthy of some peer review if anyone has a minute or 2
<babbageclunk> wallyworld: review plz? https://github.com/juju/juju/pull/10704
<babbageclunk> I'm just testing it locally
<wallyworld> babbageclunk: looking
<babbageclunk> wallyworld: actually, trying it out it's really obnoxious - changing it to log once and then only every 30s
<wallyworld> ok
<babbageclunk> wallyworld: I have to drop now, I've sent a status update. Could you babysit the PR and build the snap?
<babbageclunk> wallyworld: oop, looks like you might be on the phone - ping me on telegram if you need me.
<wallyworld> babbageclunk: will do, i'll baby sit ty
<wallyworld> yeah otp
<timClicks> wallyworld: thought you might like to know https://discourse.jujucharms.com/t/new-emoji-available/2190
<hpidcock> wallyworld: PR for you to check out https://github.com/juju/juju/pull/10705
<wallyworld> hpidcock: will do
<wallyworld> timClicks: pretty
<timClicks> wallyworld: I know how much you love social media
<wallyworld> oh joy
<hpidcock> it's on 2.6 because I don't know if we want it there or in develop. Let me know.
<wallyworld> hpidcock: so you found a real issue
<wallyworld> we can land in 2.6 just in case
<hpidcock> well 2.6.10 would be broken
<wallyworld> as we have probs broken 2.6
<wallyworld> yep
<wallyworld> we may need to halt 2.6.10
<hpidcock> essentially all the matching logic is weird
<hpidcock> and it probably never worked as it originally was intended
<hpidcock> I just happened to break it
<wallyworld> so it worked by coincidence
<hpidcock> more
<hpidcock> yeah
<wallyworld> joy
<wallyworld> hpidcock: so the core issue was with dealing with planBlockInfo?
<hpidcock> yeah, if it didn't match it would just skip checking if any other unique fields match
<wallyworld> i thought plan info was just for oracle, hmmmm
<wallyworld> also i just left a comment, see what you think
<wallyworld> hpidcock: got to go deal with something, i'll check PR again soon. need to digest the changes as it is quite fragile
<hpidcock> wallyworld: yeah agreed. I don't think it will break unless we are matching with some non-unique values... which would be concerning.
<babbageclunk> I don't think the golangci-lint people understand what deprecated means.
<babbageclunk> oh no, I take it back
<wallyworld> hpidcock: i left another comment on the PR, trying to grok the logic. is there an example of how it failed and how we fixed it?
<wallyworld> would simply removing the continues be enough, without the loop logic changes?
<wallyworld> it would be good to see what we were matching on to understand what tripped up the logic
<stickupkid> babbageclunk, haha
<nammn_de> is there a fast way to force a bootstrap to a specific juju version?
<nammn_de> e.g. client is 2.7, is there a way to force 2.6 or do i have to checkout 2.6 first?
<timClicks> --agent-version
<nammn_de> timClicks: my client then says it can only bootstrap 2.7
<achilleasa> nammn_de: snap install juju?
<timClicks> nammn_de: reading the help text more carefully, it looks like this is only possible within the same minor release
<timClicks> nammn_de: snap refresh --channel=2.6/stable juju
<nammn_de> timClicks: thanks. achilleasa: hmm yeah good idea, that's useful :D
<achilleasa> nammn_de: just be careful with paths, to be sure '/snap/bin/juju bootstrap...'
<timClicks> nammn_de: you can see which channels are available via `snap info juju`
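[sketch] The whole dance for bootstrapping with an older client:
    snap info juju                        # see which channels are available
    sudo snap refresh --channel=2.6/stable juju
    /snap/bin/juju bootstrap lxd test26   # the full path avoids picking up a dev client earlier in $PATH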
<nammn_de> great thanks guys, works like a charm, probably setting an alias for the snap juju so I don't get confused :D
<achilleasa> nammn_de: "works like a charm"... I see what you did there ;-)
<stickupkid> OR just setup go path to take precedence over snap path
<nammn_de> stickupkid: luckily it already does, puh
<nammn_de> 🙂
<stickupkid> :D
<nammn_de> someone might  wanna take a look at this "small change"? https://github.com/juju/juju/pull/10685
<Fallenour> hey icey manadart have you guys seen something like this before: https://conn-check.readthedocs.io/en/latest/tutorial-part-3.html
<Fallenour> $ # all checks on all units
<Fallenour> $ juju run --service my-service-conn-check 'actions/run-check'
<Fallenour> $ # all checks on just unit 0
<Fallenour> $ juju run --service my-service-conn-check/0 'actions/run-check'
<Fallenour> $ # nagios (not including no-nagios) checks on all units
<Fallenour> $ juju run --service my-service-conn-check 'actions/run-nagios-check'
<Fallenour> $ # nagios (not including no-nagios) checks on just unit 0
<Fallenour> $ juju run --service my-service-conn-check/0 'actions/run-nagios-check'
<Fallenour> Sorry for the wall of text everyone.
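[sketch] Those pasted docs are juju 1.x-era; under juju 2.x --service is gone, so the equivalents would presumably be:
    juju run --application my-service-conn-check 'actions/run-check'   # all units
    juju run --unit my-service-conn-check/0 'actions/run-check'        # just unit 0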
<Fallenour> rick_h, your thoughts?
<nammn_de> achilleasa stickupkid in upgrade code, do we have to expect someone upgrading from a really old juju version (e.g. 2.0) using upstart to a new 2.7 with systemd?
<hpidcock> wallyworld: removing the loops changes the priority intention that was originally there. I'll add a unit test tomorrow that matches what we were seeing.
<hpidcock> removing the continues*
<wallyworld> ok
<wallyworld> i'd be interested in the aws dev info etc as well
<wallyworld> to see how the bug manifested
<wallyworld> i need to look at code again. i did think removing continues was all that was needed
<nammn_de> achilleasa: what command do I have to run to upgrade from 2.6 to current development juju with the cli? Is it "juju upgrade-controller"? Will the defined upgrade steps run then?
<achilleasa> nammn_de: yes, just make sure that you use the correct cli version when upgrading
<nammn_de> e.g. for correct understanding: "juju2.6 bootstrap ...", followed by "$GOPATH/bin/juju upgrade-controller --build-agent" will trigger the upgradesteps to 2.7, right?
<achilleasa> I think that should do it...
<nammn_de> wallyworld: does it still work for you to attach to controllers with delve? For me it returns "could not find debug info", did we change a flag while building?
<nammn_de> wallyworld: ahh I need to set the debugflag to not omit debug information
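[sketch] The usual delve recipe for this, assuming a locally built jujud and a single jujud process on the box:
    # build without optimisations or inlining so delve can resolve debug info
    go build -gcflags "all=-N -l" -o "$GOPATH/bin/jujud" github.com/juju/juju/cmd/jujud
    dlv attach "$(pgrep -f jujud | head -1)"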
<nammn_de> while trying to run the upgrade steps my model died between 2.6 and 2.7 because `ModelLogfileMaxSize` was added but didn't exist or so. Is that expected? For now I "just" added an upgradeStep to add that value
<hml> nammn_de: there should never be a failure during upgrade like that
<hml> nammn_de: does it happen upgrading without your change as well?
<nammn_de> hml: maybe i did some weird stuff, let me try that
<nammn_de> hml: yeah, seems to be the case. Let me get the pastebin
<nammn_de> hml: what I did, bootstrap snap juju 2.6, get local code 2.7-beta, run `juju upgrade-controller --build-agent` without my upgrade code. Leads to the pastebin
<nammn_de>  the server is down per status; sshing in and digging into the machine log leads to: https://pastebin.canonical.com/p/JpHV2fMSXH/
<hml> nammn_de:  thatâs a bug.  was this 2.6.8?
<hml> or 2.6.9 i guess the released version is
<nammn_de> 2.6.9
<nammn_de> from 2.6.9 -> 2.7-beta1
<rick_h> morning juju
<nammn_de> rick_h: morning
<nammn_de> hml: trying with the newest develop version, my last one was 9 commits behind or so
<nammn_de> yeah same, leads to machine down
<Fallenour> morning rick_h
<Fallenour> off the wall question, but does anyone have a diagram which can visually represent the place terraform holds in the CI/CD process
<rick_h> 758196
<hml> manadart:  i wonder if that defaultSpaceName arg was just poorly named.  That the intention was to have a default space on a per machine basis:  https://github.com/juju/juju/blob/develop/network/containerizer/bridgepolicy.go#L288  but the code never got there
<nammn_de> How do I upgrade a local agent from a model?
<hml> nammn_de:   did you run juju upgrade-juju after upgrade-controller?
<hml> nammn_de: every model is upgraded by the user after the controller is upgraded
<nammn_de> hml: does upgrade-juju copy the jujud from the controller?
<hml> nammn_de: there is an agent (tools) tar ball which is copied and unpacked
<hml>  on each juju machine when juju upgrade-juju is called on the model
<pmatulis1> hml, upgrade-model, upgrade-model :)
<nammn_de> if i do "juju upgrade-controller --build-agent" my local one is deployed and installed, it also updates the tar ball, which then gets redistributed, right?
<hml> nammn_de: yes, but only after you upgrade the model.  that line only upgrades the controller model
<nammn_de> hml: got it, thanks 🦸‍♂️
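[sketch] The full dev upgrade flow as described; controller and model names are placeholders:
    juju bootstrap localhost test26         # with the 2.6 snap client
    juju upgrade-controller --build-agent   # with the 2.7 dev client: upgrades only the controller model
    juju upgrade-juju -m test26:default     # then each hosted model, so its agents get the new tools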
<nammn_de> someone want to take a look and mind giving some feedback? https://github.com/juju/juju/pull/10696 I am not 100% sure about the upgrade steps. They did seem to work, but I cannot guarantee it as they are not automated (?) I only added the upgrade step as it did not work otherwise. But it's all written in the comments in the PR
<stickupkid> nammn_de, send an email to thumper and ask him to review, I wonder why he didn't add them
<nammn_de> stickupkid: will do!
<timClicks> Fallenour: it would be really interesting to see whether that subordinate still functions
<timClicks> weirdly, there are no links in that docs site that point to the source code repository.. which makes the source code hard to inspect :/
#juju 2019-10-10
<babbageclunk> wallyworld: Here's the fix for bug 1847278: https://github.com/juju/juju/pull/10714
<mup> Bug #1847278: [vsphere] Juju does not respect --constraints root-disk-source to select appropriate datastore per node <juju:In Progress by 2-xtian> <https://launchpad.net/bugs/1847278>
<wallyworld> babbageclunk: looking
<babbageclunk> thanks!
<wallyworld> babbageclunk: +1, did you manage to get vsphere to play nicely to test?
<babbageclunk> yup yup
<wallyworld> \o/
<hpidcock> wallyworld: I've run the BlockDeviceMatching change against AWS, GCE and OpenStack with no issues. Do you think a run against vSphere is in order?
<wallyworld> given it broke once, it would be good if possible, just to be 10000% sure
<babbageclunk> wallyworld: tried extending a disk with the new govmomi, I still get a permission error waiting for the task
<babbageclunk> I'll check that it still works with the wait and push that change anyway
<wallyworld> damn, ok. bug time
<babbageclunk> actually, do you think I should upgrade the lib?
<babbageclunk> (I mean, push the change updating the dep)
<babbageclunk> wallyworld: ^
<wallyworld> does that help with the polling of the disk size?
<wallyworld> if no, then i'd only do it for 2.7
<babbageclunk> yeah, makes sense
<babbageclunk> no difference for the extend disk task
<wallyworld> right, the errperm thing
<wallyworld> but we still need to figure out how to poll
<babbageclunk> wallyworld: my plan is to get them to use the cli tool govc to dump details of a VM on the datastore of interest to see how the structure differs from ours
<babbageclunk> hopefully that would show what we should be looking at instead
<wallyworld> fingers crossed
<wallyworld> and hopefully an upstream bug will get some info as well
<wallyworld> maybe post the workaround we are using and see if anyone upstream replies to that
<babbageclunk> ok - will do both of those
<babbageclunk> need to go for a run though
<thumper> more robust controller config: https://github.com/juju/juju/pull/10715
<kelvinliu> wallyworld: should we allow a charm to create more than one extra service account, or just one?
<wallyworld> > 1 i think is the requirement
<kelvinliu> wallyworld: ok, https://github.com/juju/juju/pull/10716 could u take a look at this PR? thanks!
<wallyworld> sure, just pushing up one, will look in 5
<kelvinliu> wallyworld: np
<wallyworld> kelvinliu: here's a small k8s peer relation fix https://github.com/juju/juju/pull/10717
<kelvinliu> yup
<wallyworld> kelvinliu: so the PR supports 1 SA in the kubernetesResources but we need to support > 1, ie a list
<wallyworld> i believe that ken has operators that need > 1
<kelvinliu> ah, just saw u said ">" 1.. my bad eyes. changing to a slice
<wallyworld> \o/ ty
<kelvinliu> wallyworld: changed to a slice of sa in KubernetesResources.
<wallyworld> ok, looking
<kelvinliu> thx
<wallyworld> kelvinliu: +1 ty. we'll need to check with ken tomorrow to ensure it meets his needs
<wallyworld> hpidcock: you saw i +1 your pr?
<kelvinliu> wallyworld: thx,
<hpidcock> wallyworld: yep just want to get this vsphere run done + add a vsphere test in
<wallyworld> \o/ ty
<hpidcock> I don't want this to break again haha
<wallyworld> :-)
<wallyworld> and oracle
<hpidcock> already tested oracle
<wallyworld> kelvinliu: shouldn't mark the bug as fix committed until the PR lands :-)
<kelvinliu> wallyworld: ok..
<wallyworld> leave for now, but for next one...
<kelvinliu> changed 1847125 back to in progress already
<wallyworld> ok
<nammn_de_> any easy way to force upgrade steps to run? Like a matcher so that whatever version I deploy they just run again?
<manadart> Anyone able to review https://github.com/juju/juju/pull/10697 ?
<achilleasa> manadart: looking
<achilleasa> manadart: or jam : If a relation unit used to have an ingress entry in its settings and when I try to refresh the network info I can't find one anymore, should I just delete the old entry from the settings?
<achilleasa> (same question about egress-subnets)
<manadart> achilleasa: I'm a bit light on this particular area, but I can't think why it would go away unless the machine NICs actually changed...
<manadart> achilleasa: Actually, jump on a HO?
<achilleasa> manadart: I was thinking something like machine rebooted and NIC is not there anymore
<achilleasa> omw
<nammn_de_> achilleasa manadart any clue how to find out why "running machine configuration script" is taking ages? the last log from `cloud-log-output.log` is "agent binaries downloaded successfully". It's been like half an hour on an lxd controller
<manadart> nammn_de_: Blocked upgrade will do that.
<nammn_de_> manadart: I did not run "upgrade-controller". how do I find out more to debug that?
<nammn_de_> was just bootstraping
<manadart> nammn_de_: Got a /var/log/juju/machine-0.log ?
<nammn_de_> nope not created yet
<nammn_de_> juju folder still empty, cloud-output seems to have finished. Trying to find out where it is stuck
<bdx6> wallyworld: <3 #10717
<mup> Bug #10717: gnome-vfs2: new changes from Debian require merging <gnome-vfs2 (Ubuntu):Fix Released by seb128> <https://launchpad.net/bugs/10717>
<bdx6> heh - https://github.com/juju/juju/pull/10717/files
<bdx6> ^ big time thank you!
<hml> achilleasa: actually… i do have an endpoint binding question for you.  why do we sometimes have the default EB specified, and sometimes not.  It's not clear to me in these tests i'm fixing
<achilleasa> hml: test link? are they bundle-related tests?
<hml> achilleasa:  itâs all over, state, deploy (apiserver)
<hml> achilleasa:  mostly i see it, because the changes i've made have caused the default empty string endpoint to be returned in some places it wasn't expected.
<achilleasa> hml: I wonder whether they were written assuming MAAS...
<hml> achilleasa: iâve hit code from 2013 nowâ¦ so who knows at this poit
<hml> point
<achilleasa> because otherwise they would end up in network.DefaultSpaceName, right?
<hml> achilleasa:  HO?
<achilleasa> sure, meet you in daily
<rick_h> bdx:  cool was going to send you that link but didn't know if you'd get bdx6 lol
<bdx> christmas came early this year
<bdx> you guys rock
<rick_h> bdx:  trying
<rick_h> bdx:  ty for the help in chasing that down, strange one
<bdx> np
<bdx> yeah, a sneaky one
<nammn_de_> hml: manadart: regarding our discussion from before, the old code does not seem to restart the agents:
<nammn_de_> https://github.com/juju/juju/blob/74c0afcf3714e602d2a1bbde8195a3cd9fe85802/upgrades/steps_245.go#L26
<nammn_de_> hml manadart: could it be that the models don't run the upgrade steps? I upgraded the controller, which did execute them (per the log), then ran "upgrade-model" but they did not seem to run; is there an option to do it?
<nammn_de_> the upgradesteps I mean
<hml> nammn_de_:  to confirm, the models you ran upgrade-model with updated their version numbers?
<nammn_de_> hml: yes they did. Juju status is returning the new version number
<nammn_de_> does it have something to do with the steps being defined as 2.7.0 while we run 2.7-beta, or is that independent and should work regardless?
<hml> nammn_de_: there are a few different types of upgrade steps.  state and non (one other that shouldn't be necessary)
<nammn_de_> hml: reading the go doc i cannot make sense of the difference between `stateUpgradeOperations` and `upgradeOperations`. Is the former only run on the controller?
<hml> nammn_de_: the state ones are for updating the database on the controller in one go.
<hml> nammn_de_: looking at the other init changes… you might need the other, non-state version
<hml> nammn_de_: so stepsFor27() instead of stateStepsFor27()
<nammn_de_> hm: I added to this slice as well and will try it again, if that's what you meant
<nammn_de_> hml ^
<hml> nammn_de_: check out upgrades/steps_24.go and steps_245.go
<nammn_de_> hml: I see, there actually seems to be a diff, should have looked closer thanks!
<nammn_de_> hml: but we don't have a kind of graph/code doc on how upgrades are handled, right? Like which comes first, where we set the log and so on (?)
<hml> nammn_de_: we donât have a doc, the slices are executed in order.  depending on the order of the slice func.  can you expand on âwhere we set the logâ, iâm not following
<nammn_de_> hml: no that works perfectly, I miswrote, sorry. I meant lock :D
<hml> nammn_de_: those pieces happen automagically as part of  upgrades.  no additional locks should be necessary??
<nammn_de_> hml: yes, I assumed that. Just wanted to know more for background understanding
<hml> nammn_de_: :-)
<nammn_de_> hml: your tip worked like a charm, thanks 🦸‍♂️!
<achilleasa> hml: if I have a machine instance do you know how I can update its addresses in the machine doc?
<achilleasa> "SetDevicesAddresses" changes docs in other collections
<hml> achilleasa:  huh, let me look at something then
<hml> achilleasa:  SetMachineAddresses()?
<hml> achilleasa:  though it might be a set all… not an update
<achilleasa> doh! how did I miss that? thanks!
<hml> achilleasa: there are many methods with address in machine.  hahaha  :-)
<manadart> achilleasa: Note the difference between SetMachineAddresses (machine agent sourced) and SetProviderAddresses (from provider).
<achilleasa> yes, already stumbled on that :D
<achilleasa> I need the latter
<manadart> I'm not sure we really need MachineAddresses on that doc, but we can audit once things aren't in such flux.
<nammn_de_> manadart: regarding adding an upgrade step for the space model config value: is it to set the default value for existing models?
<manadart> nammn_de_: Yes. If 2.7 assumes it's there, we need to set it for older models that are upgraded instead of bootstrapped anew.
<nammn_de_> manadart: got it, thanks!
<pmatulis1> hml, hi. when creating openstack image metadata for juju you need to supply a region. where does that value come from? is that from the region defined via 'juju add-cloud'? is it related to a "project" within openstack?
<hml> pmatulis1:  both.  they should be the same.
<pmatulis1> ohh
<hml> pmatulis1:  you can have multiple regions in the o7k cloud.  and in juju
<pmatulis1> o7k?
<hml> pmatulis1:  openstack
<hml> a shorthand of sorts i've seen
<pmatulis1> ohh nice
<rick_h> k8s, o7k, a11y, wheeee
<pmatulis1> right right nice
<hml> or i18n
<hml> that one really saves you
<pmatulis1> what is it for? sorry
<rick_h> internationalization
<hml> rick_h: ty!  long type.
<hml> :-)
<pmatulis1> and a11y ?
<rick_h> that's my keyboard practice for the day
<rick_h> accessibility
<rick_h> e.g. https://a11yproject.com/
<pmatulis1> ah ha
<pmatulis1> hml, so when i do 'openstack project list' i get back 'admin' and 'service'. r u (heh) saying i need to use one of those with 'juju metadata generate-image'?
<rick_h> pmatulis1:  not projects, regions. Think us-east-1, us-east-2, etc
<hml> pmatulis1:  no, check your novarc file for an OS_REGION
<rick_h> pmatulis1:  you can have lots of projects, those are kind of "per user" tenants
<rick_h> pmatulis1:  but then you would deploy into your project resources from 1+ regions of the cloud (boston and chicago) or the like
<pmatulis1> right, but hml i thought you said project and region should be the same above. i must have misunderstood
<hml> pmatulis1: no, there are different.
<hml> pmatulis1: they
<hml> pmatulis1:  you need to match what's in the cloud definition for your juju o7k cloud, when creating the image metadata
<hml> it'll do an exact string match
<pmatulis1> hml, ok, but there is no verification for that variable with add-cloud right?
<pmatulis1> so i need to make sure it's correct
<hml> pmatulis1:  there is some verification… to check that the URL is available
<hml> pmatulis1:  you can do add-cloud from the OS_ env var which should help
<pmatulis1> hml, yeah, i'm working with microstack and stuff is not as normal
<pmatulis1> it prolly should be though
<pmatulis1> i don't have OS_REGION in my env
<hml> pmatulis1:  what does the juju cloud definition look like?
<hml> hrmâ¦
<pmatulis1> hml, i decided to go cowboy and use 'localhost' since microstack is local. but in the end i've spoken to the developer and we're going to get OS_REGION hardcoded to 'microstack'
<pmatulis1> and expose that to the user
<hml> pmatulis1:  what happens if you leave the region blank?
<hml> on both the simplestreams and the juju cloud def?
<pmatulis1> hml, i haven't tried that
<pmatulis1> hml, juju command says "ERROR image region must be specified"
<pmatulis1> also, add-cloud doesn't work without that region line
<hml> pmatulis1:  rgr
<pmatulis1> https://bugs.launchpad.net/microstack/+bug/1847649 <-- hml
<mup> Bug #1847649: Generating image metadata for Juju could be clearer in terms of Region <MicroStack:New> <https://launchpad.net/bugs/1847649>
<hml> pmatulis1:  makes me wonder if anything else is different for add-cloud etc.
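[sketch] For reference, the metadata flow; every value is a placeholder, and -r must exactly match the region string in the juju cloud definition:
    juju metadata generate-image -d ~/simplestreams -i <image-id> -s bionic -r microstack -u http://<keystone-host>:5000/v3
    juju bootstrap --metadata-source ~/simplestreams microstack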
<nammn_de_> rick_h: wallyworld: as we were talking before. One of you might wanna take a look at this small one?
<nammn_de_> https://github.com/juju/juju/pull/10685
<rick_h> nammn_de_:  sorry sure thing
<nammn_de_> rick_h: No worries, gonna go now anyway. Btw my network works again with all the beta upgrades :D
<rick_h> nammn_de_:  yay!
<rick_h> nammn_de_:  have a good night
<thumper> hmm... another day where I forget to close IRC
<thumper> morning team
<hml> morning thumper
<rick_h> morning thumper
<xavpaice> anyone know if there's a way to export the config of an application in a format compatible with "juju config mediawiki --file path/to/myconfig.yaml"?
<rick_h> xavpaice:  if you do the --format=yaml is it not acceptable?
<xavpaice> that gives me the same output as 'juju config $application', but if I pipe that to a file, make an edit, then juju config $application --filename theyaml.yaml, it's a totally different layout
<xavpaice> I mean, export-bundle, edit, then deploy, should be fine in most cases, just wanting an app config I can 'borrow' from one place and put in another
<rick_h> xavpaice:  yea, the idea should/would be that what you output you can accept back
<xavpaice> juju config output is really good because it shows all the defaults, and the help, etc - but I can't use that to configure an app, needs a bunch of processing first
<rick_h> xavpaice:  yea, understand.
<rick_h> xavpaice:  the bundle path is closer but still not going to be right, as you say.
<rick_h> xavpaice:  almost want a juju config --values-only
<rick_h> or something
<xavpaice> yeah, exactly
<rick_h> xavpaice:  so not something we currently have but I'm open to a bug on that. In general we've been looking at places where we output one thing but then don't accept it as input
<xavpaice> anyway, just a thought - if it was there already, would be handy to know about it, but I guess it's not so I'll just copy/paste out of a bundle
<rick_h> xavpaice:  this falls under that umbrella for sure
<xavpaice> cool - will scribble up a wishlist bug
<rick_h> ty!
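[sketch] One possible workaround until such a flag exists, assuming the yaml nests options under a top-level "settings" map with per-option "value" fields, and that --file accepts a flat key: value mapping (check `juju config --help` for the exact layout); needs PyYAML:
    juju config mediawiki --format=yaml | python3 -c '
    import sys, yaml
    doc = yaml.safe_load(sys.stdin)
    vals = {k: v["value"] for k, v in doc["settings"].items() if isinstance(v, dict) and "value" in v}
    print(yaml.safe_dump(vals), end="")' > myconfig.yaml
    juju config mediawiki --file myconfig.yaml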
<hpidcock> https://github.com/juju/juju/pull/10721 2.6 to develop merge
#juju 2019-10-11
<wallyworld> kelvinliu_: or babbageclunk: no rush, a small Python PR https://github.com/juju/charm-helpers/pull/385
<babbageclunk> wallyworld: lgtm
<wallyworld> yay, ty
<kelvinliu_> lgtm as well
<kelvinliu_> hi babbageclunk can i get a few mins of ur time to help me understand raftlease plz?
<babbageclunk> kelvinliu_: sure - in standup?
<kelvinliu_> babbageclunk: yup thx
<wallyworld> thumper: not urgent, here's a PR which uses "function" terminology with the v3 feature flag for the actions CLI https://github.com/juju/juju/pull/10722
<wallyworld> so now we have functions and "call" and tasks
<wallyworld> at least on the CLI
<wallyworld> charms still have actions.yaml etc
<thumper> wallyworld: do you have the bug for the caas peer relation?
<wallyworld> thumper: https://bugs.launchpad.net/juju/+bug/1818230
<thumper> ta
<mup> Bug #1818230: k8s charm fails to access peer relation in peer relation-joined hook <juju:Fix Committed by wallyworld> <https://launchpad.net/bugs/1818230>
<wallyworld> kelvinliu_: how did you and babbageclunk get on with the raft/clock thing?
<kelvinliu_> wallyworld: we r in standup. mind join us?
<wallyworld> sure
<wallyworld> kelvinliu: https://github.com/juju/juju/pull/10723
<kelvinliu> wallyworld: lgtm thx!
<wallyworld> ty
<wallyworld> babbageclunk: hmmmm, maybe my X1 Extreme does seem a little quieter now....
<babbageclunk> lol
<wallyworld> even with magnificent Goland running
<hpidcock> babbageclunk's sounded like it was taking off at the sprint
<hpidcock> for other reasons
<babbageclunk> bloody gnome. Actually, that seems to have been better lately - maybe they fixed the bug
<hpidcock> did you have it when using i3?
<babbageclunk> no, but there were too many other things I couldn't do with i3 so I've switched back
<hpidcock> :O
<kelvinliu> wallyworld: should we error if the crd scope was cluster scope or just always overwrite to Namespaced peacefully?
<manadart> Anyone able to review https://github.com/juju/juju/pull/10684 ?
<achilleasa> manadart: I will trade you for https://github.com/juju/juju/pull/10725
<manadart> achilleasa: OK.
<manadart> achilleasa: My patch landed that changes the uniter around NetworksForRelation. This conflicts with your patch. Can you pull it down and fix?
<achilleasa> manadart: sure thing. I will rebase and force-push
<achilleasa> manadart: ready
<manadart> achilleasa: I see it. Ta.
<nammn_de> rick_h:  if you're around, can we HO before/after daily? Have some small questions regarding caas and the pr we were talking about before setting adm
<manadart> achilleasa: Reviewed.
<achilleasa> manadart: should the settings block return an error for nil or maybe simply skip over nil entries?
<manadart> achilleasa: I say return. If [0] is nil the write() call could panic.
<rick_h> nammn_de:  morning, sure thing
<rick_h> nammn_de:  meet you in daily?
<achilleasa> manadart: that's a valid point!
<nammn_de> rick_h: was having lunch, heading over to daily
<icey> does `leader_set` stringify all values passed to it? ie: if I pass it a boolean, will I get back a string?
<icey> rick_h: maybe you know? ^
<rick_h> icey:  I'd expect so as it's a simple key/value vs types
<icey> :-/
<rick_h> icey:  since it's just bash data not sure what the python library is doing for type handling across the wire there
<rick_h> e.g. in a bash hook/etc it's just key=value
<icey> rick_h: apparently, stringing it up (https://github.com/juju/charm-helpers/blob/669821489497a547a768f686a2fadf88d2d5f2b2/charmhelpers/core/hookenv.py#L1121) :-/
<rick_h> icey:  yea, that's what I expected
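[sketch] The stringification is easy to see with the hook tools themselves, run from inside any hook:
    leader-set ready=True   # everything crosses the wire as key=value strings
    leader-get ready        # prints the string "True", not a boolean
    # a charm that needs typed values has to encode/decode them itself (e.g. json)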
<nammn_de> rick_h manadart hml regarding trying to upgrade a caas controller with juju. I tried with " juju upgrade --build-agent" but it returns --build-agent is not supported for k8s models. Any idea how else I can test those upgrades for caas?
<rick_h> nammn_de:  actually not sure tbh
<rick_h> they work through the operator pod but I'm not familiar with the dev scenario around using it
<rick_h> nammn_de:  and a heads up, if you want to ping the team you can use the guild highlight nick
<nammn_de> ahhh now thats good to know
<nammn_de> probably need to sync with hpidcock wallyworld or kelvin once about caas
<rick_h> nammn_de:  yea, but make sure to spread the word as honestly I *should* know :(
<nammn_de> rick_h: will do :D
<manadart> nammn_de: I *think* you would need to do something like "DOCKER_USERNAME=<you> make microk8s-operator-update"
<manadart> Then just "juju upgrade-controller"
<nammn_de> manadart: thanks, gonna try that out! Do we have that documented somewhere in case I run into bad things?
<manadart> nammn_de: According to https://discourse.jujucharms.com/t/whats-new-in-juju-k8s-for-2-6/1431 it's just upgrade-controller, but that will assume that there is a new official operator image on dockerhub.
<manadart> So I am guessing for local wrangling you need to have built the new image and somehow let juju know where to get it from.
<rick_h> manadart:  right, that's the trick. Doing that with "my own operator" is the part I've not tried at all
<nammn_de> manadart: got it thanks
<manadart> achilleasa: I pushed that change to my upgrade patch too.
<achilleasa> manadart: looking
<achilleasa> manadart: much cleaner now! thanks for the change. Doing QA
<achilleasa> manadart: the machineaddresses collection has no spaceid anymore (addresses looks as expected). That's what I should be seeing right?
<manadart> achilleasa: Machine addresses has to-date never had space information (it only comes from the provider into addresses), so that is correct.
<achilleasa> manadart: PR approved
<manadart> achilleasa: Thanks.
<achilleasa> manadart: apparently accessing the unit from within Flush breaks a whole lot of uniter tests (in a different package).... :-(
<achilleasa> hml: got a few min to help me with the uniter test mess?
<hml> achilleasa:  sure
<achilleasa> hml: daily?
<hml> achilleasa: omw
<achilleasa> hml: removing the hardcoded stuff fixed the test...
<hml> achilleasa: hahahahaha  awesome
<achilleasa> I will replace the other occurrences as well
<achilleasa> hml: hopefully CI will be happy with my changes. I will wait for Joe to take a look before I land it so my PR is expected to land on Monday morn. Is that OK for your rebase work?
<hml> achilleasa:  that should be okay.
<pmatulis1> when i use option `--metadata-source` with the 'bootstrap' command i don't see anywhere that this value is exposed. should it not show up in the output of the 'show-controller' command?
